Test Report: Hyper-V_Windows 17824

e73fe628963980756e0b55e8e214a727ecfdefcc:2023-12-18:32333

Failed tests (28/252)

TestAddons/parallel/Registry (84.61s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 27.7954ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-ln7vz" [86e7fa30-9c50-4fed-8fed-b6315dd21140] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0438482s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6fcmp" [236c99f4-02df-4cb3-9586-be8eb8f00a39] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.0109918s
addons_test.go:339: (dbg) Run:  kubectl --context addons-922300 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-922300 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-922300 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (12.3722777s)
addons_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-922300 ip
addons_test.go:358: (dbg) Done: out/minikube-windows-amd64.exe -p addons-922300 ip: (2.7806784s)
addons_test.go:363: expected stderr to be -empty- but got: *"W1218 11:49:26.168682    8292 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-922300 ip"
2023/12/18 11:49:28 [DEBUG] GET http://192.168.238.87:5000
addons_test.go:387: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-922300 addons disable registry --alsologtostderr -v=1
addons_test.go:387: (dbg) Done: out/minikube-windows-amd64.exe -p addons-922300 addons disable registry --alsologtostderr -v=1: (17.9585502s)
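Failure analysis: the assertion at addons_test.go:363 above is the actual failure. The `minikube ip` command succeeded, but the test requires stderr to be empty, and the Docker CLI wrote a warning because it could not resolve the "default" context: the context store file under C:\Users\jenkins.minikube7\.docker\contexts\meta\...\meta.json does not exist on the Jenkins host. The Docker CLI names each context's metadata directory after the SHA-256 hash of the context name, so the condition can be probed directly. A minimal sketch, assuming the standard context-store layout under %USERPROFILE%\.docker\contexts (this is illustrative and not part of the test run):

# Sketch: check whether the "default" Docker CLI context metadata exists.
# Assumption: context metadata lives at
# %USERPROFILE%\.docker\contexts\meta\<sha256(context name)>\meta.json.
$name  = 'default'
$bytes = [System.Text.Encoding]::UTF8.GetBytes($name)
$hash  = ([System.BitConverter]::ToString(
             [System.Security.Cryptography.SHA256]::Create().ComputeHash($bytes))
         ).Replace('-', '').ToLower()
$meta  = Join-Path $env:USERPROFILE ".docker\contexts\meta\$hash\meta.json"
if (Test-Path $meta) {
    Write-Output "context '$name' metadata present: $meta"
} else {
    # This is the state captured above: the CLI warns
    # "Unable to resolve the current Docker CLI context" and falls back.
    Write-Output "context '$name' metadata missing: $meta"
}

For "default", the hash is 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f, matching the path in the captured warning. The warning itself appears benign; the test fails only because it treats any stderr output as an error.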
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-922300 -n addons-922300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-922300 -n addons-922300: (14.3096593s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-922300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-922300 logs -n 25: (10.7374086s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-453500 | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:41 UTC |                     |
	|         | -p download-only-453500                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-453500 | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:42 UTC |                     |
	|         | -p download-only-453500                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-453500 | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:42 UTC |                     |
	|         | -p download-only-453500                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:42 UTC | 18 Dec 23 11:42 UTC |
	| delete  | -p download-only-453500                                                                     | download-only-453500 | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:42 UTC | 18 Dec 23 11:42 UTC |
	| delete  | -p download-only-453500                                                                     | download-only-453500 | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:42 UTC | 18 Dec 23 11:42 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-275000 | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:42 UTC |                     |
	|         | binary-mirror-275000                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:57962                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-275000                                                                     | binary-mirror-275000 | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:42 UTC | 18 Dec 23 11:42 UTC |
	| addons  | disable dashboard -p                                                                        | addons-922300        | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:42 UTC |                     |
	|         | addons-922300                                                                               |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-922300        | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:42 UTC |                     |
	|         | addons-922300                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-922300 --wait=true                                                                | addons-922300        | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:42 UTC | 18 Dec 23 11:49 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --driver=hyperv --addons=ingress                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-922300 addons                                                                        | addons-922300        | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:49 UTC | 18 Dec 23 11:49 UTC |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ssh     | addons-922300 ssh cat                                                                       | addons-922300        | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:49 UTC | 18 Dec 23 11:49 UTC |
	|         | /opt/local-path-provisioner/pvc-72013ae2-6033-4d78-8186-560dbbf121d3_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| ip      | addons-922300 ip                                                                            | addons-922300        | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:49 UTC | 18 Dec 23 11:49 UTC |
	| addons  | disable nvidia-device-plugin                                                                | addons-922300        | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:49 UTC | 18 Dec 23 11:49 UTC |
	|         | -p addons-922300                                                                            |                      |                   |         |                     |                     |
	| addons  | addons-922300 addons disable                                                                | addons-922300        | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:49 UTC | 18 Dec 23 11:49 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-922300 addons disable                                                                | addons-922300        | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:49 UTC | 18 Dec 23 11:49 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-922300        | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:49 UTC |                     |
	|         | -p addons-922300                                                                            |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-922300 addons                                                                        | addons-922300        | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:49 UTC |                     |
	|         | disable csi-hostpath-driver                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-922300        | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:49 UTC |                     |
	|         | addons-922300                                                                               |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 11:42:39
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 11:42:39.454100   14276 out.go:296] Setting OutFile to fd 472 ...
	I1218 11:42:39.455375   14276 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:42:39.455375   14276 out.go:309] Setting ErrFile to fd 784...
	I1218 11:42:39.455440   14276 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:42:39.480124   14276 out.go:303] Setting JSON to false
	I1218 11:42:39.483551   14276 start.go:128] hostinfo: {"hostname":"minikube7","uptime":234,"bootTime":1702899525,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1218 11:42:39.483551   14276 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1218 11:42:39.484947   14276 out.go:177] * [addons-922300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1218 11:42:39.485910   14276 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 11:42:39.485910   14276 notify.go:220] Checking for updates...
	I1218 11:42:39.486543   14276 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 11:42:39.487339   14276 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1218 11:42:39.488038   14276 out.go:177]   - MINIKUBE_LOCATION=17824
	I1218 11:42:39.488641   14276 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 11:42:39.489996   14276 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 11:42:44.854954   14276 out.go:177] * Using the hyperv driver based on user configuration
	I1218 11:42:44.856142   14276 start.go:298] selected driver: hyperv
	I1218 11:42:44.856142   14276 start.go:902] validating driver "hyperv" against <nil>
	I1218 11:42:44.856142   14276 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 11:42:44.908792   14276 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1218 11:42:44.910132   14276 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 11:42:44.910132   14276 cni.go:84] Creating CNI manager for ""
	I1218 11:42:44.910132   14276 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1218 11:42:44.910132   14276 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1218 11:42:44.910132   14276 start_flags.go:323] config:
	{Name:addons-922300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-922300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 11:42:44.910795   14276 iso.go:125] acquiring lock: {Name:mka7fdf4332632f41b99a369cc525e40499a0030 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 11:42:44.912249   14276 out.go:177] * Starting control plane node addons-922300 in cluster addons-922300
	I1218 11:42:44.912906   14276 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 11:42:44.912906   14276 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1218 11:42:44.912906   14276 cache.go:56] Caching tarball of preloaded images
	I1218 11:42:44.913561   14276 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1218 11:42:44.913561   14276 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1218 11:42:44.914268   14276 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\config.json ...
	I1218 11:42:44.914933   14276 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\config.json: {Name:mk911c00f8678be609cf1ac291b3055ee5a32a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:42:44.915608   14276 start.go:365] acquiring machines lock for addons-922300: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1218 11:42:44.916256   14276 start.go:369] acquired machines lock for "addons-922300" in 114.8µs
	I1218 11:42:44.916256   14276 start.go:93] Provisioning new machine with config: &{Name:addons-922300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-922300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1218 11:42:44.916831   14276 start.go:125] createHost starting for "" (driver="hyperv")
	I1218 11:42:44.917156   14276 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1218 11:42:44.917911   14276 start.go:159] libmachine.API.Create for "addons-922300" (driver="hyperv")
	I1218 11:42:44.917911   14276 client.go:168] LocalClient.Create starting
	I1218 11:42:44.919099   14276 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1218 11:42:45.047034   14276 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1218 11:42:45.221264   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1218 11:42:47.347569   14276 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1218 11:42:47.347684   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:42:47.347811   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1218 11:42:49.064231   14276 main.go:141] libmachine: [stdout =====>] : False
	
	I1218 11:42:49.064231   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:42:49.064504   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1218 11:42:50.526473   14276 main.go:141] libmachine: [stdout =====>] : True
	
	I1218 11:42:50.526780   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:42:50.526900   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1218 11:42:54.407699   14276 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1218 11:42:54.407699   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:42:54.411634   14276 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702490427-17765-amd64.iso...
	I1218 11:42:54.946551   14276 main.go:141] libmachine: Creating SSH key...
	I1218 11:42:55.287328   14276 main.go:141] libmachine: Creating VM...
	I1218 11:42:55.287849   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1218 11:42:58.145500   14276 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1218 11:42:58.145500   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:42:58.145818   14276 main.go:141] libmachine: Using switch "Default Switch"
	I1218 11:42:58.145818   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1218 11:42:59.937828   14276 main.go:141] libmachine: [stdout =====>] : True
	
	I1218 11:42:59.937828   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:42:59.937828   14276 main.go:141] libmachine: Creating VHD
	I1218 11:42:59.937828   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\fixed.vhd' -SizeBytes 10MB -Fixed
	I1218 11:43:03.707927   14276 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 69D14033-FAA6-46E5-B774-C2E9D8DF94AD
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1218 11:43:03.708014   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:43:03.708096   14276 main.go:141] libmachine: Writing magic tar header
	I1218 11:43:03.708226   14276 main.go:141] libmachine: Writing SSH key tar header
	I1218 11:43:03.717544   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\disk.vhd' -VHDType Dynamic -DeleteSource
	I1218 11:43:06.871964   14276 main.go:141] libmachine: [stdout =====>] : 
	I1218 11:43:06.871964   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:43:06.871964   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\disk.vhd' -SizeBytes 20000MB
	I1218 11:43:09.423714   14276 main.go:141] libmachine: [stdout =====>] : 
	I1218 11:43:09.423884   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:43:09.423884   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-922300 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I1218 11:43:13.037955   14276 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-922300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1218 11:43:13.037955   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:43:13.038149   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-922300 -DynamicMemoryEnabled $false
	I1218 11:43:15.216514   14276 main.go:141] libmachine: [stdout =====>] : 
	I1218 11:43:15.216712   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:43:15.216712   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-922300 -Count 2
	I1218 11:43:17.337639   14276 main.go:141] libmachine: [stdout =====>] : 
	I1218 11:43:17.337839   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:43:17.337839   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-922300 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\boot2docker.iso'
	I1218 11:43:19.909410   14276 main.go:141] libmachine: [stdout =====>] : 
	I1218 11:43:19.909725   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:43:19.909876   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-922300 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\disk.vhd'
	I1218 11:43:22.462399   14276 main.go:141] libmachine: [stdout =====>] : 
	I1218 11:43:22.462399   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:43:22.462399   14276 main.go:141] libmachine: Starting VM...
	I1218 11:43:22.462673   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-922300
	I1218 11:43:25.342675   14276 main.go:141] libmachine: [stdout =====>] : 
	I1218 11:43:25.342930   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:43:25.342930   14276 main.go:141] libmachine: Waiting for host to start...
	I1218 11:43:25.343173   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:43:27.619746   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:43:27.620058   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:43:27.620097   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:43:30.077445   14276 main.go:141] libmachine: [stdout =====>] : 
	I1218 11:43:30.077445   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:43:31.091800   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:43:33.284600   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:43:33.284600   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:43:33.284725   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:43:35.759427   14276 main.go:141] libmachine: [stdout =====>] : 
	I1218 11:43:35.759427   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:43:36.775006   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:43:38.981618   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:43:38.981880   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:43:38.982008   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:43:41.504233   14276 main.go:141] libmachine: [stdout =====>] : 
	I1218 11:43:41.504470   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:43:42.518716   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:43:44.674712   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:43:44.674817   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:43:44.675005   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:43:47.199205   14276 main.go:141] libmachine: [stdout =====>] : 
	I1218 11:43:47.199578   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:43:48.213538   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:43:50.370265   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:43:50.370265   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:43:50.370265   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:43:52.940374   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:43:52.940542   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:43:52.940633   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:43:55.040749   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:43:55.040923   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:43:55.040923   14276 machine.go:88] provisioning docker machine ...
	I1218 11:43:55.041023   14276 buildroot.go:166] provisioning hostname "addons-922300"
	I1218 11:43:55.041169   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:43:57.142484   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:43:57.142484   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:43:57.142645   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:43:59.621218   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:43:59.621218   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:43:59.626497   14276 main.go:141] libmachine: Using SSH client type: native
	I1218 11:43:59.636260   14276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d4f40] 0x13d7a80 <nil>  [] 0s} 192.168.238.87 22 <nil> <nil>}
	I1218 11:43:59.636260   14276 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-922300 && echo "addons-922300" | sudo tee /etc/hostname
	I1218 11:43:59.789420   14276 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-922300
	
	I1218 11:43:59.789503   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:44:01.907633   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:44:01.907633   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:01.907970   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:44:04.384075   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:44:04.384075   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:04.389462   14276 main.go:141] libmachine: Using SSH client type: native
	I1218 11:44:04.390260   14276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d4f40] 0x13d7a80 <nil>  [] 0s} 192.168.238.87 22 <nil> <nil>}
	I1218 11:44:04.390260   14276 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-922300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-922300/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-922300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 11:44:04.544303   14276 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1218 11:44:04.544488   14276 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1218 11:44:04.544488   14276 buildroot.go:174] setting up certificates
	I1218 11:44:04.544552   14276 provision.go:83] configureAuth start
	I1218 11:44:04.544625   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:44:06.636841   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:44:06.637034   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:06.637117   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:44:09.181550   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:44:09.181837   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:09.181837   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:44:11.251456   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:44:11.251456   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:11.251662   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:44:13.741981   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:44:13.742229   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:13.742229   14276 provision.go:138] copyHostCerts
	I1218 11:44:13.742398   14276 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1218 11:44:13.744552   14276 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1218 11:44:13.746108   14276 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1218 11:44:13.747593   14276 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-922300 san=[192.168.238.87 192.168.238.87 localhost 127.0.0.1 minikube addons-922300]
	I1218 11:44:13.911799   14276 provision.go:172] copyRemoteCerts
	I1218 11:44:13.924333   14276 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 11:44:13.924333   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:44:16.065411   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:44:16.065650   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:16.065650   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:44:18.545997   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:44:18.546287   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:18.546966   14276 sshutil.go:53] new ssh client: &{IP:192.168.238.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\id_rsa Username:docker}
	I1218 11:44:18.656803   14276 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7324677s)
	I1218 11:44:18.657069   14276 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1218 11:44:18.702551   14276 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I1218 11:44:18.743276   14276 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 11:44:18.789433   14276 provision.go:86] duration metric: configureAuth took 14.2447876s
	I1218 11:44:18.789490   14276 buildroot.go:189] setting minikube options for container-runtime
	I1218 11:44:18.790096   14276 config.go:182] Loaded profile config "addons-922300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 11:44:18.790096   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:44:20.898549   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:44:20.898763   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:20.899016   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:44:23.431428   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:44:23.431428   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:23.438670   14276 main.go:141] libmachine: Using SSH client type: native
	I1218 11:44:23.439411   14276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d4f40] 0x13d7a80 <nil>  [] 0s} 192.168.238.87 22 <nil> <nil>}
	I1218 11:44:23.439411   14276 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1218 11:44:23.579004   14276 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1218 11:44:23.579082   14276 buildroot.go:70] root file system type: tmpfs
	I1218 11:44:23.579336   14276 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1218 11:44:23.579398   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:44:25.663644   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:44:25.663644   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:25.663748   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:44:28.156323   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:44:28.156559   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:28.161575   14276 main.go:141] libmachine: Using SSH client type: native
	I1218 11:44:28.162313   14276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d4f40] 0x13d7a80 <nil>  [] 0s} 192.168.238.87 22 <nil> <nil>}
	I1218 11:44:28.162313   14276 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1218 11:44:28.353624   14276 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1218 11:44:28.353624   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:44:30.507212   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:44:30.507368   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:30.507368   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:44:33.022912   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:44:33.023233   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:33.028579   14276 main.go:141] libmachine: Using SSH client type: native
	I1218 11:44:33.029305   14276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d4f40] 0x13d7a80 <nil>  [] 0s} 192.168.238.87 22 <nil> <nil>}
	I1218 11:44:33.029336   14276 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1218 11:44:34.016976   14276 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
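	The command above is a change-detection idiom: diff -u exits non-zero both when the files differ and when the old file is missing (as here, where docker.service did not exist yet), so the move/daemon-reload/enable/restart branch runs only when the rendered unit actually changed. The same pattern with illustrative paths:
	
	# Install a rendered config only when it differs from what is deployed (paths illustrative)
	if ! sudo diff -u /lib/systemd/system/app.service /tmp/app.service.new >/dev/null 2>&1; then
	    sudo mv /tmp/app.service.new /lib/systemd/system/app.service
	    sudo systemctl daemon-reload && sudo systemctl restart app
	fi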
	
	I1218 11:44:34.016976   14276 machine.go:91] provisioned docker machine in 38.9760288s
	I1218 11:44:34.016976   14276 client.go:171] LocalClient.Create took 1m49.0990026s
	I1218 11:44:34.016976   14276 start.go:167] duration metric: libmachine.API.Create for "addons-922300" took 1m49.0990026s
	I1218 11:44:34.016976   14276 start.go:300] post-start starting for "addons-922300" (driver="hyperv")
	I1218 11:44:34.017529   14276 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 11:44:34.030799   14276 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 11:44:34.030799   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:44:36.146724   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:44:36.146868   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:36.146944   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:44:38.709655   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:44:38.709803   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:38.709803   14276 sshutil.go:53] new ssh client: &{IP:192.168.238.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\id_rsa Username:docker}
	I1218 11:44:38.820805   14276 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7898842s)
	I1218 11:44:38.835715   14276 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 11:44:38.841800   14276 info.go:137] Remote host: Buildroot 2021.02.12
	I1218 11:44:38.841800   14276 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1218 11:44:38.842723   14276 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1218 11:44:38.842723   14276 start.go:303] post-start completed in 4.8257431s
	I1218 11:44:38.845286   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:44:41.076920   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:44:41.077147   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:41.077147   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:44:43.560546   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:44:43.560694   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:43.560979   14276 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\config.json ...
	I1218 11:44:43.565006   14276 start.go:128] duration metric: createHost completed in 1m58.64801s
	I1218 11:44:43.565169   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:44:45.692793   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:44:45.692793   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:45.692879   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:44:48.252588   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:44:48.252685   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:48.257677   14276 main.go:141] libmachine: Using SSH client type: native
	I1218 11:44:48.258306   14276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d4f40] 0x13d7a80 <nil>  [] 0s} 192.168.238.87 22 <nil> <nil>}
	I1218 11:44:48.258306   14276 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1218 11:44:48.404938   14276 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702899888.406464701
	
	I1218 11:44:48.404938   14276 fix.go:206] guest clock: 1702899888.406464701
	I1218 11:44:48.404938   14276 fix.go:219] Guest: 2023-12-18 11:44:48.406464701 +0000 UTC Remote: 2023-12-18 11:44:43.5651448 +0000 UTC m=+124.279022201 (delta=4.841319901s)
	I1218 11:44:48.404938   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:44:50.689633   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:44:50.689963   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:50.689963   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:44:53.423701   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:44:53.423996   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:53.429212   14276 main.go:141] libmachine: Using SSH client type: native
	I1218 11:44:53.429985   14276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d4f40] 0x13d7a80 <nil>  [] 0s} 192.168.238.87 22 <nil> <nil>}
	I1218 11:44:53.429985   14276 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702899888
	I1218 11:44:53.584890   14276 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Dec 18 11:44:48 UTC 2023
	
	I1218 11:44:53.584890   14276 fix.go:226] clock set: Mon Dec 18 11:44:48 UTC 2023
	 (err=<nil>)
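	The clock fix above takes two SSH round trips: date +%s.%N reads the guest clock, the 4.84s delta against the host's wall clock is computed locally, and sudo date -s @<epoch> pins the guest to the host time. The same sequence in isolation (SSH target and epoch shown are illustrative):
	
	# Read the guest clock, then force it to a chosen epoch (values illustrative)
	ssh docker@192.168.238.87 'date +%s.%N'               # e.g. 1702899888.406464701
	ssh docker@192.168.238.87 'sudo date -s @1702899888'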
	I1218 11:44:53.584890   14276 start.go:83] releasing machines lock for "addons-922300", held for 2m8.6685571s
	I1218 11:44:53.585440   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:44:55.755028   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:44:55.755193   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:55.755193   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:44:58.336517   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:44:58.336517   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:44:58.340543   14276 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 11:44:58.340622   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:44:58.356350   14276 ssh_runner.go:195] Run: cat /version.json
	I1218 11:44:58.356350   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:00.598872   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:45:00.598936   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:45:00.598936   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:45:00.660450   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:45:00.660450   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:45:00.660582   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:45:03.319785   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:45:03.320029   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:45:03.320407   14276 sshutil.go:53] new ssh client: &{IP:192.168.238.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\id_rsa Username:docker}
	I1218 11:45:03.398467   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:45:03.398467   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:45:03.399222   14276 sshutil.go:53] new ssh client: &{IP:192.168.238.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\id_rsa Username:docker}
	I1218 11:45:03.488821   14276 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1482748s)
	I1218 11:45:03.496617   14276 ssh_runner.go:235] Completed: cat /version.json: (5.1402634s)
	I1218 11:45:03.508372   14276 ssh_runner.go:195] Run: systemctl --version
	I1218 11:45:03.530603   14276 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 11:45:03.541190   14276 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 11:45:03.554075   14276 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 11:45:03.585021   14276 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1218 11:45:03.585178   14276 start.go:475] detecting cgroup driver to use...
	I1218 11:45:03.585573   14276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 11:45:03.633712   14276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1218 11:45:03.662417   14276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 11:45:03.679540   14276 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 11:45:03.691888   14276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 11:45:03.722701   14276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 11:45:03.753394   14276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 11:45:03.784955   14276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 11:45:03.817612   14276 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 11:45:03.849235   14276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 11:45:03.880926   14276 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 11:45:03.910012   14276 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 11:45:03.938661   14276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:45:04.137969   14276 ssh_runner.go:195] Run: sudo systemctl restart containerd
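	The sed edits above switch containerd to the cgroupfs driver (SystemdCgroup = false), force the runc v2 shim in place of the v1 runtimes, and point conf_dir at /etc/cni/net.d before the daemon-reload and restart. A quick check of the result (command shown as an assumption, not a step from this log):
	
	# Confirm the cgroup driver containerd was just configured with
	grep SystemdCgroup /etc/containerd/config.toml    # expect: SystemdCgroup = false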
	I1218 11:45:04.167297   14276 start.go:475] detecting cgroup driver to use...
	I1218 11:45:04.180496   14276 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1218 11:45:04.213693   14276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 11:45:04.250614   14276 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1218 11:45:04.289271   14276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 11:45:04.324939   14276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 11:45:04.360402   14276 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 11:45:04.413140   14276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 11:45:04.436175   14276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 11:45:04.482466   14276 ssh_runner.go:195] Run: which cri-dockerd
	I1218 11:45:04.501667   14276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1218 11:45:04.520983   14276 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1218 11:45:04.565358   14276 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1218 11:45:04.766253   14276 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1218 11:45:04.944829   14276 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1218 11:45:04.944829   14276 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1218 11:45:04.987404   14276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:45:05.184214   14276 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1218 11:45:06.686359   14276 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5021437s)
	I1218 11:45:06.699615   14276 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1218 11:45:06.893447   14276 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1218 11:45:07.083510   14276 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1218 11:45:07.289038   14276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:45:07.496484   14276 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1218 11:45:07.540458   14276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:45:07.740322   14276 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1218 11:45:07.866907   14276 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1218 11:45:07.879762   14276 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1218 11:45:07.891720   14276 start.go:543] Will wait 60s for crictl version
	I1218 11:45:07.904269   14276 ssh_runner.go:195] Run: which crictl
	I1218 11:45:07.922283   14276 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1218 11:45:08.012730   14276 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1218 11:45:08.023191   14276 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1218 11:45:08.075358   14276 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1218 11:45:08.114074   14276 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1218 11:45:08.114602   14276 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1218 11:45:08.119122   14276 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1218 11:45:08.119646   14276 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1218 11:45:08.119646   14276 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1218 11:45:08.119646   14276 ip.go:207] Found interface: {Index:8 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ed:dc:88 Flags:up|broadcast|multicast|running}
	I1218 11:45:08.123356   14276 ip.go:210] interface addr: fe80::61bd:e46f:b0aa:cbb0/64
	I1218 11:45:08.123356   14276 ip.go:210] interface addr: 192.168.224.1/20
	I1218 11:45:08.137942   14276 ssh_runner.go:195] Run: grep 192.168.224.1	host.minikube.internal$ /etc/hosts
	I1218 11:45:08.144146   14276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
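	The /etc/hosts rewrite above is duplicate-safe: it filters out any existing host.minikube.internal line, appends the fresh mapping, and stages the result in /tmp/h.$$ because the redirection runs unprivileged while the final copy uses sudo. The same pattern with an illustrative name and IP:
	
	# Replace-or-add a single /etc/hosts entry without duplicating it (name/IP illustrative)
	{ grep -v $'\tmy.internal.host$' /etc/hosts; printf '10.0.0.5\tmy.internal.host\n'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$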
	I1218 11:45:08.165411   14276 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 11:45:08.175245   14276 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1218 11:45:08.203061   14276 docker.go:671] Got preloaded images: 
	I1218 11:45:08.203061   14276 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I1218 11:45:08.215732   14276 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1218 11:45:08.245437   14276 ssh_runner.go:195] Run: which lz4
	I1218 11:45:08.264952   14276 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1218 11:45:08.271116   14276 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1218 11:45:08.271333   14276 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I1218 11:45:10.546395   14276 docker.go:635] Took 2.294568 seconds to copy over tarball
	I1218 11:45:10.559384   14276 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1218 11:45:16.497912   14276 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (5.9384761s)
	I1218 11:45:16.497912   14276 ssh_runner.go:146] rm: /preloaded.tar.lz4
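	The preload path above skips image pulls entirely: the stat probe shows /preloaded.tar.lz4 is absent, the ~423 MB tarball is copied over SSH, and tar with the lz4 filter unpacks the docker image store straight into /var before docker is restarted. The extract step in isolation:
	
	# Unpack an lz4-compressed image preload into /var, then reclaim the space (as in the log)
	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4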
	I1218 11:45:16.565120   14276 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1218 11:45:16.581253   14276 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1218 11:45:16.620306   14276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:45:16.800473   14276 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1218 11:45:23.189746   14276 ssh_runner.go:235] Completed: sudo systemctl restart docker: (6.389268s)
	I1218 11:45:23.200410   14276 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1218 11:45:23.231920   14276 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1218 11:45:23.231920   14276 cache_images.go:84] Images are preloaded, skipping loading
	I1218 11:45:23.240731   14276 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1218 11:45:23.281551   14276 cni.go:84] Creating CNI manager for ""
	I1218 11:45:23.281918   14276 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1218 11:45:23.281996   14276 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1218 11:45:23.281996   14276 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.238.87 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-922300 NodeName:addons-922300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.238.87"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.238.87 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 11:45:23.282344   14276 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.238.87
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-922300"
	  kubeletExtraArgs:
	    node-ip: 192.168.238.87
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.238.87"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
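	The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml below and later fed to kubeadm init --config. Such a file can be sanity-checked offline with kubeadm's dry-run mode (shown as an assumption, not a step from this log):
	
	# Validate the generated kubeadm config without mutating the node
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run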
	
	I1218 11:45:23.282595   14276 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-922300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.238.87
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-922300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1218 11:45:23.295158   14276 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1218 11:45:23.314219   14276 binaries.go:44] Found k8s binaries, skipping transfer
	I1218 11:45:23.326122   14276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 11:45:23.342139   14276 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1218 11:45:23.374117   14276 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1218 11:45:23.408007   14276 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1218 11:45:23.458874   14276 ssh_runner.go:195] Run: grep 192.168.238.87	control-plane.minikube.internal$ /etc/hosts
	I1218 11:45:23.463875   14276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.238.87	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 11:45:23.485307   14276 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300 for IP: 192.168.238.87
	I1218 11:45:23.485307   14276 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:45:23.485307   14276 certs.go:204] generating minikubeCA CA: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I1218 11:45:23.677106   14276 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt ...
	I1218 11:45:23.677106   14276 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt: {Name:mkfaab427ca81a644dd8158f14f3f807f65e8ec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:45:23.679250   14276 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key ...
	I1218 11:45:23.679250   14276 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key: {Name:mke77f92a4900f4ba92d06a20a85ddb2e967d43b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:45:23.680243   14276 certs.go:204] generating proxyClientCA CA: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I1218 11:45:23.814796   14276 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt ...
	I1218 11:45:23.814796   14276 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mk06242bb3e648e29b1f160fecc7578d1c3ccbe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:45:23.816917   14276 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key ...
	I1218 11:45:23.816917   14276 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk9dbfc690f0c353aa1a789ba901364f0646dd1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:45:23.819090   14276 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.key
	I1218 11:45:23.819918   14276 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt with IP's: []
	I1218 11:45:23.937984   14276 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt ...
	I1218 11:45:23.937984   14276 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: {Name:mk63194215424848a274b3ca01348ab51571000c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:45:23.938896   14276 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.key ...
	I1218 11:45:23.938896   14276 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.key: {Name:mk8bc1038fa6fb0f4fc054f65337ea0bc37a4686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:45:23.940848   14276 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\apiserver.key.41f2a41d
	I1218 11:45:23.940848   14276 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\apiserver.crt.41f2a41d with IP's: [192.168.238.87 10.96.0.1 127.0.0.1 10.0.0.1]
	I1218 11:45:24.200023   14276 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\apiserver.crt.41f2a41d ...
	I1218 11:45:24.200023   14276 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\apiserver.crt.41f2a41d: {Name:mk0c7c4f0dea97d3d13013a54cb31dec164f2362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:45:24.201081   14276 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\apiserver.key.41f2a41d ...
	I1218 11:45:24.201081   14276 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\apiserver.key.41f2a41d: {Name:mk40bb45beccf34dbcc04a6ae2e56ecaebc7f9a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:45:24.202025   14276 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\apiserver.crt.41f2a41d -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\apiserver.crt
	I1218 11:45:24.215025   14276 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\apiserver.key.41f2a41d -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\apiserver.key
	I1218 11:45:24.216035   14276 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\proxy-client.key
	I1218 11:45:24.216035   14276 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\proxy-client.crt with IP's: []
	I1218 11:45:24.611586   14276 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\proxy-client.crt ...
	I1218 11:45:24.611731   14276 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\proxy-client.crt: {Name:mk277b810df905f84da7d5362f3afc53a41b0cdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:45:24.612899   14276 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\proxy-client.key ...
	I1218 11:45:24.612899   14276 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\proxy-client.key: {Name:mkccbe7dcb96a1660ad33d686c27f6afe2a16ec4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:45:24.624901   14276 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1218 11:45:24.624901   14276 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1218 11:45:24.625927   14276 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1218 11:45:24.625927   14276 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1218 11:45:24.627492   14276 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1218 11:45:24.668426   14276 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1218 11:45:24.713167   14276 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 11:45:24.750699   14276 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1218 11:45:24.790204   14276 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 11:45:24.833184   14276 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 11:45:24.874917   14276 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 11:45:24.919483   14276 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1218 11:45:24.962597   14276 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 11:45:25.007462   14276 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 11:45:25.052846   14276 ssh_runner.go:195] Run: openssl version
	I1218 11:45:25.072776   14276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1218 11:45:25.106658   14276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 11:45:25.113927   14276 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 18 11:45 /usr/share/ca-certificates/minikubeCA.pem
	I1218 11:45:25.126558   14276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 11:45:25.147043   14276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
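	The two steps above implement OpenSSL's hashed-directory convention: openssl x509 -hash -noout prints the subject-name hash of the CA (b5213941 here), and the certificate is symlinked as /etc/ssl/certs/<hash>.0 so TLS clients can locate it by hash at verification time. The same pair of steps, condensed:
	
	# Link a CA into the OpenSSL hashed cert directory (same files as in the log)
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"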
	I1218 11:45:25.176041   14276 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1218 11:45:25.181752   14276 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1218 11:45:25.182145   14276 kubeadm.go:404] StartCluster: {Name:addons-922300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-922300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.238.87 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 11:45:25.191184   14276 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1218 11:45:25.231184   14276 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 11:45:25.260724   14276 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 11:45:25.287231   14276 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 11:45:25.304253   14276 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 11:45:25.304386   14276 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1218 11:45:25.609950   14276 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 11:45:40.010758   14276 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1218 11:45:40.011048   14276 kubeadm.go:322] [preflight] Running pre-flight checks
	I1218 11:45:40.011048   14276 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 11:45:40.011048   14276 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 11:45:40.011645   14276 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1218 11:45:40.011789   14276 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 11:45:40.012461   14276 out.go:204]   - Generating certificates and keys ...
	I1218 11:45:40.012798   14276 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1218 11:45:40.012918   14276 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1218 11:45:40.013035   14276 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 11:45:40.013258   14276 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1218 11:45:40.013298   14276 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1218 11:45:40.013298   14276 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1218 11:45:40.013298   14276 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1218 11:45:40.013298   14276 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-922300 localhost] and IPs [192.168.238.87 127.0.0.1 ::1]
	I1218 11:45:40.013298   14276 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1218 11:45:40.013298   14276 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-922300 localhost] and IPs [192.168.238.87 127.0.0.1 ::1]
	I1218 11:45:40.014251   14276 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 11:45:40.014251   14276 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 11:45:40.014251   14276 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1218 11:45:40.014251   14276 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 11:45:40.014251   14276 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 11:45:40.014251   14276 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 11:45:40.014251   14276 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 11:45:40.014251   14276 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 11:45:40.015254   14276 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 11:45:40.015254   14276 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 11:45:40.015254   14276 out.go:204]   - Booting up control plane ...
	I1218 11:45:40.016246   14276 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 11:45:40.016246   14276 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 11:45:40.016246   14276 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 11:45:40.016246   14276 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 11:45:40.016246   14276 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 11:45:40.016246   14276 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1218 11:45:40.017255   14276 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1218 11:45:40.017255   14276 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.009790 seconds
	I1218 11:45:40.017255   14276 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1218 11:45:40.017255   14276 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1218 11:45:40.018257   14276 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1218 11:45:40.018257   14276 kubeadm.go:322] [mark-control-plane] Marking the node addons-922300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1218 11:45:40.018257   14276 kubeadm.go:322] [bootstrap-token] Using token: fplj9k.4vivvlyi1ljo9rqt
	I1218 11:45:40.019244   14276 out.go:204]   - Configuring RBAC rules ...
	I1218 11:45:40.019244   14276 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1218 11:45:40.019244   14276 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1218 11:45:40.019244   14276 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1218 11:45:40.020256   14276 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1218 11:45:40.020256   14276 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1218 11:45:40.020256   14276 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1218 11:45:40.020256   14276 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1218 11:45:40.020256   14276 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1218 11:45:40.021256   14276 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1218 11:45:40.021256   14276 kubeadm.go:322] 
	I1218 11:45:40.021256   14276 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1218 11:45:40.021256   14276 kubeadm.go:322] 
	I1218 11:45:40.021256   14276 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1218 11:45:40.021256   14276 kubeadm.go:322] 
	I1218 11:45:40.021256   14276 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1218 11:45:40.021256   14276 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1218 11:45:40.021256   14276 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1218 11:45:40.021256   14276 kubeadm.go:322] 
	I1218 11:45:40.021256   14276 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1218 11:45:40.021256   14276 kubeadm.go:322] 
	I1218 11:45:40.022255   14276 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1218 11:45:40.022255   14276 kubeadm.go:322] 
	I1218 11:45:40.022255   14276 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1218 11:45:40.022255   14276 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1218 11:45:40.022255   14276 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1218 11:45:40.022255   14276 kubeadm.go:322] 
	I1218 11:45:40.022255   14276 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1218 11:45:40.022255   14276 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1218 11:45:40.022255   14276 kubeadm.go:322] 
	I1218 11:45:40.023253   14276 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fplj9k.4vivvlyi1ljo9rqt \
	I1218 11:45:40.023253   14276 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b2fa66f0127ff189a61b5e0d7ad6d9c9a72d2910f0374f3c179dae174436a982 \
	I1218 11:45:40.023253   14276 kubeadm.go:322] 	--control-plane 
	I1218 11:45:40.023253   14276 kubeadm.go:322] 
	I1218 11:45:40.023253   14276 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1218 11:45:40.023253   14276 kubeadm.go:322] 
	I1218 11:45:40.023253   14276 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fplj9k.4vivvlyi1ljo9rqt \
	I1218 11:45:40.023253   14276 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b2fa66f0127ff189a61b5e0d7ad6d9c9a72d2910f0374f3c179dae174436a982 
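	The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key; joining nodes use it to authenticate the control plane before trusting it. It can be recomputed from the CA on disk (the standard kubeadm procedure, assuming an RSA CA key as here; this cluster's certificatesDir is /var/lib/minikube/certs):
	
	# Recompute the discovery token CA cert hash from the cluster CA
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'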
	I1218 11:45:40.024247   14276 cni.go:84] Creating CNI manager for ""
	I1218 11:45:40.024247   14276 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1218 11:45:40.024247   14276 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1218 11:45:40.038244   14276 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1218 11:45:40.066560   14276 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1218 11:45:40.099991   14276 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1218 11:45:40.118857   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:40.119854   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=30d8ecd1811578f7b9db580c501c654c189f68d4 minikube.k8s.io/name=addons-922300 minikube.k8s.io/updated_at=2023_12_18T11_45_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:40.156641   14276 ops.go:34] apiserver oom_adj: -16
	I1218 11:45:40.489656   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:40.999577   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:41.503394   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:41.990398   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:42.493596   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:42.991135   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:43.494773   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:43.993899   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:44.499919   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:45.005467   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:45.492527   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:45.992361   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:46.500584   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:46.995123   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:47.503136   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:47.989698   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:48.490503   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:48.992706   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:49.494641   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:50.006210   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:50.500710   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:51.004346   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:51.491319   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:51.994616   14276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 11:45:52.251413   14276 kubeadm.go:1088] duration metric: took 12.1513397s to wait for elevateKubeSystemPrivileges.
	I1218 11:45:52.251413   14276 kubeadm.go:406] StartCluster complete in 27.069249s
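	The burst of kubectl get sa default calls between 11:45:40 and 11:45:52 is a readiness poll: the default ServiceAccount only exists once the controller-manager's service-account controller has run, so elevateKubeSystemPrivileges retries until it appears. The same gate written as a loop (poll interval is illustrative):
	
	# Block until the default ServiceAccount exists (poll interval illustrative)
	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	done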
	I1218 11:45:52.251413   14276 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:45:52.251413   14276 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 11:45:52.253130   14276 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:45:52.254456   14276 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1218 11:45:52.255236   14276 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1218 11:45:52.255293   14276 addons.go:69] Setting default-storageclass=true in profile "addons-922300"
	I1218 11:45:52.255293   14276 addons.go:69] Setting helm-tiller=true in profile "addons-922300"
	I1218 11:45:52.255293   14276 addons.go:69] Setting cloud-spanner=true in profile "addons-922300"
	I1218 11:45:52.255293   14276 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-922300"
	I1218 11:45:52.255293   14276 addons.go:231] Setting addon helm-tiller=true in "addons-922300"
	I1218 11:45:52.255293   14276 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-922300"
	I1218 11:45:52.255293   14276 addons.go:69] Setting ingress-dns=true in profile "addons-922300"
	I1218 11:45:52.255293   14276 addons.go:69] Setting metrics-server=true in profile "addons-922300"
	I1218 11:45:52.255293   14276 addons.go:69] Setting ingress=true in profile "addons-922300"
	I1218 11:45:52.255293   14276 addons.go:69] Setting inspektor-gadget=true in profile "addons-922300"
	I1218 11:45:52.255293   14276 addons.go:231] Setting addon inspektor-gadget=true in "addons-922300"
	I1218 11:45:52.255293   14276 addons.go:69] Setting registry=true in profile "addons-922300"
	I1218 11:45:52.255293   14276 host.go:66] Checking if "addons-922300" exists ...
	I1218 11:45:52.255293   14276 addons.go:231] Setting addon registry=true in "addons-922300"
	I1218 11:45:52.255293   14276 host.go:66] Checking if "addons-922300" exists ...
	I1218 11:45:52.255889   14276 host.go:66] Checking if "addons-922300" exists ...
	I1218 11:45:52.255293   14276 host.go:66] Checking if "addons-922300" exists ...
	I1218 11:45:52.255293   14276 config.go:182] Loaded profile config "addons-922300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 11:45:52.255293   14276 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-922300"
	I1218 11:45:52.255293   14276 addons.go:69] Setting gcp-auth=true in profile "addons-922300"
	I1218 11:45:52.255293   14276 addons.go:231] Setting addon cloud-spanner=true in "addons-922300"
	I1218 11:45:52.256331   14276 host.go:66] Checking if "addons-922300" exists ...
	I1218 11:45:52.255293   14276 addons.go:69] Setting volumesnapshots=true in profile "addons-922300"
	I1218 11:45:52.256419   14276 addons.go:231] Setting addon volumesnapshots=true in "addons-922300"
	I1218 11:45:52.256533   14276 host.go:66] Checking if "addons-922300" exists ...
	I1218 11:45:52.255293   14276 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-922300"
	I1218 11:45:52.256609   14276 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-922300"
	I1218 11:45:52.256699   14276 host.go:66] Checking if "addons-922300" exists ...
	I1218 11:45:52.255293   14276 addons.go:69] Setting storage-provisioner=true in profile "addons-922300"
	I1218 11:45:52.256896   14276 addons.go:231] Setting addon storage-provisioner=true in "addons-922300"
	I1218 11:45:52.257058   14276 host.go:66] Checking if "addons-922300" exists ...
	I1218 11:45:52.255293   14276 addons.go:231] Setting addon ingress-dns=true in "addons-922300"
	I1218 11:45:52.255293   14276 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-922300"
	I1218 11:45:52.257360   14276 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-922300"
	I1218 11:45:52.257360   14276 host.go:66] Checking if "addons-922300" exists ...
	I1218 11:45:52.255293   14276 addons.go:231] Setting addon metrics-server=true in "addons-922300"
	I1218 11:45:52.257572   14276 host.go:66] Checking if "addons-922300" exists ...
	I1218 11:45:52.255293   14276 addons.go:231] Setting addon ingress=true in "addons-922300"
	I1218 11:45:52.257949   14276 host.go:66] Checking if "addons-922300" exists ...
	I1218 11:45:52.256419   14276 mustload.go:65] Loading cluster: addons-922300
	I1218 11:45:52.258794   14276 config.go:182] Loaded profile config "addons-922300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 11:45:52.259897   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:52.260903   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:52.262480   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:52.262581   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:52.262671   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:52.263176   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:52.263176   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:52.264646   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:52.264646   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:52.265539   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:52.265539   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:52.265539   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:52.265539   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:52.265539   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
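Note: each [executing ==>] / [stdout =====>] pair in this burst is libmachine shelling out to PowerShell, once per addon goroutine, to read the VM's power state. An illustrative Go sketch of a single such call (this is not the actual hyperv driver code; a Windows host with Hyper-V is assumed):

    // vmstate.go: illustrative sketch of one Get-VM state query as seen in the log.
    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func vmState(vm string) (stdout, stderr string, err error) {
        cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
            fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
        var out, errBuf bytes.Buffer
        cmd.Stdout, cmd.Stderr = &out, &errBuf
        err = cmd.Run()
        return out.String(), errBuf.String(), err
    }

    func main() {
        out, errOut, err := vmState("addons-922300")
        fmt.Printf("[stdout =====>] : %s\n[stderr =====>] : %s\nerr: %v\n", out, errOut, err)
    }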
	I1218 11:45:53.039942   14276 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1218 11:45:53.593941   14276 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-922300" context rescaled to 1 replicas
	I1218 11:45:53.593941   14276 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.238.87 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1218 11:45:53.627943   14276 out.go:177] * Verifying Kubernetes components...
	I1218 11:45:53.682941   14276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 11:45:58.059979   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:45:58.059979   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:45:58.061698   14276 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1218 11:45:58.064163   14276 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1218 11:45:58.064163   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1218 11:45:58.064163   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:58.250458   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:45:58.250458   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:45:58.254456   14276 addons.go:231] Setting addon default-storageclass=true in "addons-922300"
	I1218 11:45:58.254456   14276 host.go:66] Checking if "addons-922300" exists ...
	I1218 11:45:58.256456   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:58.324581   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:45:58.324581   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:45:58.325581   14276 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1218 11:45:58.326586   14276 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1218 11:45:58.326586   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1218 11:45:58.326586   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:58.365140   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:45:58.365140   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:45:58.365778   14276 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1218 11:45:58.367575   14276 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1218 11:45:58.367575   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1218 11:45:58.367575   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:58.444218   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:45:58.444218   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:45:58.446582   14276 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-922300"
	I1218 11:45:58.446582   14276 host.go:66] Checking if "addons-922300" exists ...
	I1218 11:45:58.448646   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:58.456581   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:45:58.456581   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:45:58.456581   14276 host.go:66] Checking if "addons-922300" exists ...
	I1218 11:45:58.556630   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:45:58.556630   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:45:58.557632   14276 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1218 11:45:58.558740   14276 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1218 11:45:58.558740   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1218 11:45:58.558740   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:58.608900   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:45:58.608900   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:45:58.611245   14276 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1218 11:45:58.611976   14276 out.go:177]   - Using image docker.io/registry:2.8.3
	I1218 11:45:58.613108   14276 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1218 11:45:58.613108   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1218 11:45:58.613108   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:58.615648   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:45:58.615648   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:45:58.616633   14276 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1218 11:45:58.617784   14276 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1218 11:45:58.617784   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1218 11:45:58.617784   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:58.665633   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:45:58.665633   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:45:58.668626   14276 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1218 11:45:58.677642   14276 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1218 11:45:58.677642   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1218 11:45:58.677642   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:58.718950   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:45:58.718950   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:45:58.723624   14276 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1218 11:45:58.725628   14276 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1218 11:45:58.726634   14276 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1218 11:45:58.729633   14276 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1218 11:45:58.729633   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1218 11:45:58.729633   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:58.942645   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:45:58.942645   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:45:58.956643   14276 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1218 11:45:58.960644   14276 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1218 11:45:58.974642   14276 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1218 11:45:58.980646   14276 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1218 11:45:59.027644   14276 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1218 11:45:59.056482   14276 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1218 11:45:59.088641   14276 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1218 11:45:59.103769   14276 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1218 11:45:59.106777   14276 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1218 11:45:59.106777   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1218 11:45:59.106777   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:59.120781   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:45:59.120781   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:45:59.121780   14276 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1218 11:45:59.122780   14276 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1218 11:45:59.122780   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1218 11:45:59.122780   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:59.313431   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:45:59.314824   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:45:59.322063   14276 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 11:45:59.325107   14276 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 11:45:59.325107   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 11:45:59.325107   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:45:59.904686   14276 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.8647393s)
	I1218 11:45:59.904686   14276 start.go:929] {"host.minikube.internal": 192.168.224.1} host record injected into CoreDNS's ConfigMap
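Note: the sed pipeline that just completed (6.86s) splices a hosts block into the CoreDNS Corefile so that cluster pods can resolve host.minikube.internal to the host gateway, and also inserts a "log" directive before "errors" to enable query logging. Reconstructed from the sed script itself, the injected hosts block is:

            hosts {
               192.168.224.1 host.minikube.internal
               fallthrough
            }

The "fallthrough" keeps CoreDNS forwarding every other name to its usual upstream after checking this static entry.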
	I1218 11:45:59.904686   14276 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (6.2217407s)
	I1218 11:45:59.907683   14276 node_ready.go:35] waiting up to 6m0s for node "addons-922300" to be "Ready" ...
	I1218 11:45:59.958686   14276 node_ready.go:49] node "addons-922300" has status "Ready":"True"
	I1218 11:45:59.958686   14276 node_ready.go:38] duration metric: took 51.0035ms waiting for node "addons-922300" to be "Ready" ...
	I1218 11:45:59.958686   14276 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
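Note: the pod_ready lines that follow poll each system-critical pod until its Ready condition reports True, capped at 6m0s per pod. A compact client-go sketch of one such wait (names are illustrative; minikube's real loop lives in pod_ready.go):

    // podready.go: sketch of polling one pod's Ready condition with client-go.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // not found yet; keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(podReady(cs, "kube-system", "coredns-5dd5756b68-fqjjr", 6*time.Minute))
    }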
	I1218 11:46:00.097347   14276 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fqjjr" in "kube-system" namespace to be "Ready" ...
	I1218 11:46:02.393471   14276 pod_ready.go:102] pod "coredns-5dd5756b68-fqjjr" in "kube-system" namespace has status "Ready":"False"
	I1218 11:46:03.720805   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:46:03.720805   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:03.720805   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:46:03.733804   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:46:03.733804   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:03.733804   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:46:03.800849   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:46:03.800849   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:03.801033   14276 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 11:46:03.801033   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 11:46:03.801033   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:46:03.860029   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:46:03.860029   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:03.861035   14276 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1218 11:46:03.864051   14276 out.go:177]   - Using image docker.io/busybox:stable
	I1218 11:46:03.866052   14276 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1218 11:46:03.866052   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1218 11:46:03.866052   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:46:04.065037   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:46:04.065037   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:04.065037   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:46:04.219807   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:46:04.219870   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:04.219931   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:46:04.599604   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:46:04.599604   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:04.599604   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:46:04.650602   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:46:04.650602   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:04.650602   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:46:04.697080   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:46:04.697080   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:04.697080   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:46:04.738521   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:46:04.738521   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:04.738521   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:46:04.785527   14276 pod_ready.go:102] pod "coredns-5dd5756b68-fqjjr" in "kube-system" namespace has status "Ready":"False"
	I1218 11:46:05.294278   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:46:05.294278   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:05.294278   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:46:05.489034   14276 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1218 11:46:05.489034   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:46:05.753159   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:46:05.753159   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:05.753298   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:46:06.620793   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:46:06.620793   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:06.620793   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:46:07.137951   14276 pod_ready.go:102] pod "coredns-5dd5756b68-fqjjr" in "kube-system" namespace has status "Ready":"False"
	I1218 11:46:09.155054   14276 pod_ready.go:102] pod "coredns-5dd5756b68-fqjjr" in "kube-system" namespace has status "Ready":"False"
	I1218 11:46:09.822624   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:46:09.822624   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:09.822624   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:46:09.982515   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:46:09.982619   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:09.982619   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:46:10.853564   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:46:10.853564   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:10.854557   14276 sshutil.go:53] new ssh client: &{IP:192.168.238.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\id_rsa Username:docker}
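Note: each "new ssh client" line dials the VM at 192.168.238.87:22 as user docker with the profile's id_rsa key, one client per parallel addon installer. A bare-bones sketch with golang.org/x/crypto/ssh (this is not sshutil's actual code; host-key verification is skipped only because the target is a throwaway local test VM):

    // sshclient.go: bare-bones sketch of the "new ssh client" step in the log.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func newClient(ip, keyPath, user string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM only
        }
        return ssh.Dial("tcp", ip+":22", cfg)
    }

    func main() {
        c, err := newClient("192.168.238.87",
            `C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\id_rsa`,
            "docker")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer c.Close()
        fmt.Println("connected")
    }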
	I1218 11:46:10.948966   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:46:10.949035   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:10.950040   14276 sshutil.go:53] new ssh client: &{IP:192.168.238.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\id_rsa Username:docker}
	I1218 11:46:11.058166   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:46:11.058166   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:11.059302   14276 sshutil.go:53] new ssh client: &{IP:192.168.238.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\id_rsa Username:docker}
	I1218 11:46:11.116428   14276 pod_ready.go:92] pod "coredns-5dd5756b68-fqjjr" in "kube-system" namespace has status "Ready":"True"
	I1218 11:46:11.116428   14276 pod_ready.go:81] duration metric: took 11.0190721s waiting for pod "coredns-5dd5756b68-fqjjr" in "kube-system" namespace to be "Ready" ...
	I1218 11:46:11.116428   14276 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gmhxp" in "kube-system" namespace to be "Ready" ...
	I1218 11:46:11.126568   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:46:11.126568   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:11.127364   14276 sshutil.go:53] new ssh client: &{IP:192.168.238.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\id_rsa Username:docker}
	I1218 11:46:11.138011   14276 pod_ready.go:92] pod "coredns-5dd5756b68-gmhxp" in "kube-system" namespace has status "Ready":"True"
	I1218 11:46:11.138011   14276 pod_ready.go:81] duration metric: took 21.5832ms waiting for pod "coredns-5dd5756b68-gmhxp" in "kube-system" namespace to be "Ready" ...
	I1218 11:46:11.138011   14276 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-922300" in "kube-system" namespace to be "Ready" ...
	I1218 11:46:11.169526   14276 pod_ready.go:92] pod "etcd-addons-922300" in "kube-system" namespace has status "Ready":"True"
	I1218 11:46:11.169526   14276 pod_ready.go:81] duration metric: took 31.5143ms waiting for pod "etcd-addons-922300" in "kube-system" namespace to be "Ready" ...
	I1218 11:46:11.169526   14276 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-922300" in "kube-system" namespace to be "Ready" ...
	I1218 11:46:11.197684   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:46:11.197684   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:11.198685   14276 sshutil.go:53] new ssh client: &{IP:192.168.238.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\id_rsa Username:docker}
	I1218 11:46:11.198685   14276 pod_ready.go:92] pod "kube-apiserver-addons-922300" in "kube-system" namespace has status "Ready":"True"
	I1218 11:46:11.199740   14276 pod_ready.go:81] duration metric: took 30.2145ms waiting for pod "kube-apiserver-addons-922300" in "kube-system" namespace to be "Ready" ...
	I1218 11:46:11.199740   14276 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-922300" in "kube-system" namespace to be "Ready" ...
	I1218 11:46:11.223701   14276 pod_ready.go:92] pod "kube-controller-manager-addons-922300" in "kube-system" namespace has status "Ready":"True"
	I1218 11:46:11.223701   14276 pod_ready.go:81] duration metric: took 23.961ms waiting for pod "kube-controller-manager-addons-922300" in "kube-system" namespace to be "Ready" ...
	I1218 11:46:11.223701   14276 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-crmjb" in "kube-system" namespace to be "Ready" ...
	I1218 11:46:11.246685   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:46:11.247685   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:11.248675   14276 sshutil.go:53] new ssh client: &{IP:192.168.238.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\id_rsa Username:docker}
	I1218 11:46:11.272196   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:46:11.272196   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:11.272196   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:46:11.279213   14276 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1218 11:46:11.279213   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1218 11:46:11.311180   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:46:11.311180   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:11.312184   14276 sshutil.go:53] new ssh client: &{IP:192.168.238.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\id_rsa Username:docker}
	I1218 11:46:11.379041   14276 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1218 11:46:11.379041   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1218 11:46:11.424210   14276 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1218 11:46:11.424210   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1218 11:46:11.431073   14276 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1218 11:46:11.431073   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1218 11:46:11.463129   14276 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1218 11:46:11.463129   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1218 11:46:11.519003   14276 pod_ready.go:92] pod "kube-proxy-crmjb" in "kube-system" namespace has status "Ready":"True"
	I1218 11:46:11.519553   14276 pod_ready.go:81] duration metric: took 295.8516ms waiting for pod "kube-proxy-crmjb" in "kube-system" namespace to be "Ready" ...
	I1218 11:46:11.519553   14276 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-922300" in "kube-system" namespace to be "Ready" ...
	I1218 11:46:11.582907   14276 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1218 11:46:11.582907   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1218 11:46:11.590907   14276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1218 11:46:11.603904   14276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1218 11:46:11.623813   14276 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1218 11:46:11.623942   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1218 11:46:11.655568   14276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1218 11:46:11.701091   14276 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1218 11:46:11.701091   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1218 11:46:11.726571   14276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1218 11:46:11.810124   14276 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1218 11:46:11.810244   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1218 11:46:11.842935   14276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1218 11:46:11.912767   14276 pod_ready.go:92] pod "kube-scheduler-addons-922300" in "kube-system" namespace has status "Ready":"True"
	I1218 11:46:11.912831   14276 pod_ready.go:81] duration metric: took 393.2772ms waiting for pod "kube-scheduler-addons-922300" in "kube-system" namespace to be "Ready" ...
	I1218 11:46:11.912895   14276 pod_ready.go:38] duration metric: took 11.9541349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 11:46:11.912895   14276 api_server.go:52] waiting for apiserver process to appear ...
	I1218 11:46:11.936997   14276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:46:11.981001   14276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1218 11:46:12.018242   14276 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1218 11:46:12.018242   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1218 11:46:12.031108   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:46:12.031201   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:12.031410   14276 sshutil.go:53] new ssh client: &{IP:192.168.238.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\id_rsa Username:docker}
	I1218 11:46:12.093960   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:46:12.094202   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:12.095222   14276 sshutil.go:53] new ssh client: &{IP:192.168.238.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\id_rsa Username:docker}
	I1218 11:46:12.156721   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:46:12.156721   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:12.157984   14276 sshutil.go:53] new ssh client: &{IP:192.168.238.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\id_rsa Username:docker}
	I1218 11:46:12.254239   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:46:12.254239   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:12.254805   14276 sshutil.go:53] new ssh client: &{IP:192.168.238.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\id_rsa Username:docker}
	I1218 11:46:12.300265   14276 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1218 11:46:12.300406   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1218 11:46:12.423563   14276 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1218 11:46:12.423563   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1218 11:46:12.556948   14276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1218 11:46:12.635014   14276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 11:46:12.682489   14276 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1218 11:46:12.682489   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1218 11:46:12.821684   14276 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1218 11:46:12.821822   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1218 11:46:12.958804   14276 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1218 11:46:12.958867   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1218 11:46:12.962752   14276 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1218 11:46:12.962866   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1218 11:46:13.090577   14276 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1218 11:46:13.090635   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1218 11:46:13.106009   14276 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1218 11:46:13.106068   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1218 11:46:13.132362   14276 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1218 11:46:13.132362   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1218 11:46:13.254216   14276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1218 11:46:13.316529   14276 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1218 11:46:13.316589   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1218 11:46:13.331949   14276 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1218 11:46:13.331949   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1218 11:46:13.486886   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:46:13.487278   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:13.487590   14276 sshutil.go:53] new ssh client: &{IP:192.168.238.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\id_rsa Username:docker}
	I1218 11:46:13.534895   14276 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1218 11:46:13.534895   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1218 11:46:13.538297   14276 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1218 11:46:13.538362   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1218 11:46:13.612252   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:46:13.612252   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:13.612848   14276 sshutil.go:53] new ssh client: &{IP:192.168.238.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\id_rsa Username:docker}
	I1218 11:46:13.673177   14276 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1218 11:46:13.673177   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1218 11:46:13.689462   14276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1218 11:46:13.780155   14276 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1218 11:46:13.780155   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1218 11:46:13.907852   14276 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1218 11:46:13.908094   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1218 11:46:13.973616   14276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 11:46:14.192470   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:46:14.192470   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:14.193196   14276 sshutil.go:53] new ssh client: &{IP:192.168.238.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\id_rsa Username:docker}
	I1218 11:46:14.224297   14276 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1218 11:46:14.224380   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1218 11:46:14.478525   14276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1218 11:46:14.512192   14276 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1218 11:46:14.512192   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1218 11:46:14.687249   14276 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1218 11:46:14.687249   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1218 11:46:14.862472   14276 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
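Note: the recurring "scp memory --> <path> (N bytes)" lines stream a manifest held in memory straight onto the VM instead of copying a file from disk. One way to sketch that over an SSH connection, piping the payload into "sudo tee" on the remote side (hypothetical helper; ssh_runner's real implementation differs):

    // scpmem.go: illustrative sketch of "scp memory --> <remote path>".
    package main

    import (
        "bytes"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // writeRemote pipes an in-memory payload into "sudo tee" on the VM.
    func writeRemote(c *ssh.Client, data []byte, path string) error {
        sess, err := c.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", path))
    }

    func main() {
        key, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        c, err := ssh.Dial("tcp", "192.168.238.87:22", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test VM only
        })
        if err != nil {
            panic(err)
        }
        defer c.Close()
        fmt.Println(writeRemote(c, []byte("demo payload\n"), "/tmp/scp-memory-demo"))
    }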
	I1218 11:46:15.008894   14276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1218 11:46:15.159003   14276 addons.go:231] Setting addon gcp-auth=true in "addons-922300"
	I1218 11:46:15.159222   14276 host.go:66] Checking if "addons-922300" exists ...
	I1218 11:46:15.160549   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:46:16.437653   14276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.846742s)
	I1218 11:46:16.437653   14276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.7819802s)
	I1218 11:46:16.437653   14276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.8337449s)
	I1218 11:46:17.345308   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:46:17.345373   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:17.360107   14276 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1218 11:46:17.360107   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-922300 ).state
	I1218 11:46:17.571279   14276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.844704s)
	I1218 11:46:19.763645   14276 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 11:46:19.763727   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:19.763863   14276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-922300 ).networkadapters[0]).ipaddresses[0]
	I1218 11:46:22.557788   14276 main.go:141] libmachine: [stdout =====>] : 192.168.238.87
	
	I1218 11:46:22.557974   14276 main.go:141] libmachine: [stderr =====>] : 
	I1218 11:46:22.558747   14276 sshutil.go:53] new ssh client: &{IP:192.168.238.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-922300\id_rsa Username:docker}
	I1218 11:46:24.655573   14276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (12.8125843s)
	I1218 11:46:24.655698   14276 addons.go:467] Verifying addon ingress=true in "addons-922300"
	I1218 11:46:24.655739   14276 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (12.7186907s)
	I1218 11:46:24.655782   14276 api_server.go:72] duration metric: took 31.0618168s to wait for apiserver process to appear ...
	I1218 11:46:24.655782   14276 api_server.go:88] waiting for apiserver healthz status ...
	I1218 11:46:24.656427   14276 out.go:177] * Verifying ingress addon...
	I1218 11:46:24.655782   14276 api_server.go:253] Checking apiserver healthz at https://192.168.238.87:8443/healthz ...
	I1218 11:46:24.655782   14276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (12.6747707s)
	I1218 11:46:24.657422   14276 addons.go:467] Verifying addon metrics-server=true in "addons-922300"
	I1218 11:46:24.655782   14276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (12.0988243s)
	I1218 11:46:24.655782   14276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.4010192s)
	I1218 11:46:24.655782   14276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.0207586s)
	I1218 11:46:24.655782   14276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.9663113s)
	W1218 11:46:24.657748   14276 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1218 11:46:24.655782   14276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.6821574s)
	I1218 11:46:24.656327   14276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.1777945s)
	I1218 11:46:24.657592   14276 addons.go:467] Verifying addon registry=true in "addons-922300"
	I1218 11:46:24.657822   14276 retry.go:31] will retry after 293.586922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
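Note: this apply failure is an ordering race, not a bad manifest. The VolumeSnapshotClass object is submitted in the same kubectl apply batch as the CRD that defines its kind, and the API server has not finished registering the new type ("ensure CRDs are installed first"). minikube's answer is the 293ms retry above; the race can also be avoided by applying the CRDs first and waiting for them to become Established before applying dependent objects, as in this illustrative sketch (file paths taken from the log; this is not what minikube runs here):

    // crdorder.go: sketch of the CRD-first apply order that avoids the
    // "no matches for kind" race retried in the log above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // run invokes kubectl against the cluster's kubeconfig, echoing combined output.
    func run(args ...string) error {
        cmd := exec.Command("kubectl", args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        steps := [][]string{
            // 1. Register the CRD on its own first.
            {"apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"},
            // 2. Block until the API server serves the new kind.
            {"wait", "--for=condition=established", "--timeout=60s",
                "crd/volumesnapshotclasses.snapshot.storage.k8s.io"},
            // 3. Only then apply objects of that kind.
            {"apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
        }
        for _, step := range steps {
            if err := run(step...); err != nil {
                fmt.Println("step failed:", err)
                return
            }
        }
    }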
	I1218 11:46:24.658968   14276 out.go:177] * Verifying registry addon...
	I1218 11:46:24.659619   14276 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1218 11:46:24.661224   14276 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1218 11:46:24.679390   14276 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1218 11:46:24.679550   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:24.680835   14276 api_server.go:279] https://192.168.238.87:8443/healthz returned 200:
	ok
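Note: the healthz probe is a plain HTTPS GET against the API server, and the 200/"ok" above is the entire contract. A tiny sketch (TLS verification is skipped here purely for brevity; a faithful client would trust the cluster CA from the kubeconfig instead):

    // healthz.go: sketch of the apiserver healthz probe seen in the log.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.238.87:8443/healthz")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("returned %d:\n%s\n", resp.StatusCode, body) // expect "ok"
    }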
	W1218 11:46:24.680835   14276 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
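Note: the "object has been modified" error in this warning is ordinary optimistic-concurrency contention: two addon goroutines raced to update the same StorageClass. client-go's stock remedy is retry.RetryOnConflict, which re-reads the latest object on every attempt. Sketched below for the exact mutation that failed, marking local-path as the default class (illustrative, not minikube's code):

    // defaultsc.go: sketch of resolving the StorageClass update conflict above
    // with client-go's RetryOnConflict.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            // Re-read the latest version on each attempt, then mutate and update.
            sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
            _, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
            return err
        })
        fmt.Println("result:", err)
    }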
	I1218 11:46:24.684609   14276 api_server.go:141] control plane version: v1.28.4
	I1218 11:46:24.684692   14276 api_server.go:131] duration metric: took 28.9095ms to wait for apiserver health ...
	I1218 11:46:24.684757   14276 system_pods.go:43] waiting for kube-system pods to appear ...
	I1218 11:46:24.687944   14276 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1218 11:46:24.687944   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
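
The kapi.go lines that dominate the rest of this log are a fixed-interval poll: list the pods matching a label selector and keep waiting while any of them is not yet Running. A sketch of that loop with client-go (the package and helper name are hypothetical, not minikube's actual kapi implementation):

    package kapiwait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // WaitRunning polls pods matching selector in ns until all are Running.
    func WaitRunning(cs *kubernetes.Clientset, ns, selector string) error {
        return wait.PollUntilContextTimeout(context.TODO(), 500*time.Millisecond, 10*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // transient error or no pods yet: keep polling
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        return false, nil // e.g. the "current state: Pending" lines below
                    }
                }
                return true, nil
            })
    }

With a clientset built as in the previous sketch, WaitRunning(cs, "kube-system", "kubernetes.io/minikube-addons=registry") is the wait the registry verification below keeps reporting on.
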
	I1218 11:46:24.698780   14276 system_pods.go:59] 15 kube-system pods found
	I1218 11:46:24.698780   14276 system_pods.go:61] "coredns-5dd5756b68-fqjjr" [bf88f4ff-c00c-48ba-bec2-bb7b0638369d] Running
	I1218 11:46:24.698780   14276 system_pods.go:61] "etcd-addons-922300" [453bcdae-2d81-45fc-9552-98a8d9a522ef] Running
	I1218 11:46:24.698780   14276 system_pods.go:61] "kube-apiserver-addons-922300" [47ee850c-a30f-488f-88f8-e0fec9214843] Running
	I1218 11:46:24.698780   14276 system_pods.go:61] "kube-controller-manager-addons-922300" [ed52d0fb-8613-46dd-989f-b8d36d6c7664] Running
	I1218 11:46:24.698780   14276 system_pods.go:61] "kube-ingress-dns-minikube" [992cce25-289e-440e-8c49-c54d54593d16] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1218 11:46:24.698780   14276 system_pods.go:61] "kube-proxy-crmjb" [63a9dabd-268e-4448-8547-ef4382241165] Running
	I1218 11:46:24.698780   14276 system_pods.go:61] "kube-scheduler-addons-922300" [1674a3e6-6f48-4ce9-a739-6b4fc0385ccf] Running
	I1218 11:46:24.698780   14276 system_pods.go:61] "metrics-server-7c66d45ddc-z6jw8" [42fa398b-e99a-4a62-b71a-e0b0b7401b30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1218 11:46:24.699855   14276 system_pods.go:61] "nvidia-device-plugin-daemonset-l942k" [67ec697e-80eb-48d5-9be5-6aa964458aac] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1218 11:46:24.699902   14276 system_pods.go:61] "registry-ln7vz" [86e7fa30-9c50-4fed-8fed-b6315dd21140] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1218 11:46:24.699902   14276 system_pods.go:61] "registry-proxy-6fcmp" [236c99f4-02df-4cb3-9586-be8eb8f00a39] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1218 11:46:24.699941   14276 system_pods.go:61] "snapshot-controller-58dbcc7b99-ptqqj" [1d977e64-3486-4fbe-90a4-76ddcd9aef8e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1218 11:46:24.699941   14276 system_pods.go:61] "snapshot-controller-58dbcc7b99-wxjmq" [d2c322ad-927b-4195-941e-899f428f759c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1218 11:46:24.699977   14276 system_pods.go:61] "storage-provisioner" [d5a23d2e-0382-41c7-ac9b-f29b894db998] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1218 11:46:24.700008   14276 system_pods.go:61] "tiller-deploy-7b677967b9-btzrr" [d71469da-c580-44a0-88d1-3239dbeb8d89] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1218 11:46:24.700008   14276 system_pods.go:74] duration metric: took 15.2505ms to wait for pod list to return data ...
	I1218 11:46:24.700008   14276 default_sa.go:34] waiting for default service account to be created ...
	I1218 11:46:24.701844   14276 default_sa.go:45] found service account: "default"
	I1218 11:46:24.702642   14276 default_sa.go:55] duration metric: took 2.6345ms for default service account to be created ...
	I1218 11:46:24.702642   14276 system_pods.go:116] waiting for k8s-apps to be running ...
	I1218 11:46:24.712355   14276 system_pods.go:86] 15 kube-system pods found
	I1218 11:46:24.712385   14276 system_pods.go:89] "coredns-5dd5756b68-fqjjr" [bf88f4ff-c00c-48ba-bec2-bb7b0638369d] Running
	I1218 11:46:24.712385   14276 system_pods.go:89] "etcd-addons-922300" [453bcdae-2d81-45fc-9552-98a8d9a522ef] Running
	I1218 11:46:24.712450   14276 system_pods.go:89] "kube-apiserver-addons-922300" [47ee850c-a30f-488f-88f8-e0fec9214843] Running
	I1218 11:46:24.712450   14276 system_pods.go:89] "kube-controller-manager-addons-922300" [ed52d0fb-8613-46dd-989f-b8d36d6c7664] Running
	I1218 11:46:24.712513   14276 system_pods.go:89] "kube-ingress-dns-minikube" [992cce25-289e-440e-8c49-c54d54593d16] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1218 11:46:24.712513   14276 system_pods.go:89] "kube-proxy-crmjb" [63a9dabd-268e-4448-8547-ef4382241165] Running
	I1218 11:46:24.712513   14276 system_pods.go:89] "kube-scheduler-addons-922300" [1674a3e6-6f48-4ce9-a739-6b4fc0385ccf] Running
	I1218 11:46:24.712513   14276 system_pods.go:89] "metrics-server-7c66d45ddc-z6jw8" [42fa398b-e99a-4a62-b71a-e0b0b7401b30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1218 11:46:24.712559   14276 system_pods.go:89] "nvidia-device-plugin-daemonset-l942k" [67ec697e-80eb-48d5-9be5-6aa964458aac] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1218 11:46:24.712590   14276 system_pods.go:89] "registry-ln7vz" [86e7fa30-9c50-4fed-8fed-b6315dd21140] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1218 11:46:24.712590   14276 system_pods.go:89] "registry-proxy-6fcmp" [236c99f4-02df-4cb3-9586-be8eb8f00a39] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1218 11:46:24.712590   14276 system_pods.go:89] "snapshot-controller-58dbcc7b99-ptqqj" [1d977e64-3486-4fbe-90a4-76ddcd9aef8e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1218 11:46:24.712640   14276 system_pods.go:89] "snapshot-controller-58dbcc7b99-wxjmq" [d2c322ad-927b-4195-941e-899f428f759c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1218 11:46:24.712640   14276 system_pods.go:89] "storage-provisioner" [d5a23d2e-0382-41c7-ac9b-f29b894db998] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1218 11:46:24.712679   14276 system_pods.go:89] "tiller-deploy-7b677967b9-btzrr" [d71469da-c580-44a0-88d1-3239dbeb8d89] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1218 11:46:24.712679   14276 system_pods.go:126] duration metric: took 10.0366ms to wait for k8s-apps to be running ...
	I1218 11:46:24.712679   14276 system_svc.go:44] waiting for kubelet service to be running ....
	I1218 11:46:24.730077   14276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
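
systemctl is-active --quiet reports state only through its exit code (0 means active), which is why this step produces no output in the log. A local sketch of the same check; minikube runs the command over SSH on the VM instead:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Exit status 0 <=> the unit is active; --quiet suppresses stdout.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }
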
	I1218 11:46:24.968361   14276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1218 11:46:25.170281   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:25.170384   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:25.682653   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:25.688971   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:26.176852   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:26.176852   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:26.683114   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:26.683283   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:27.169949   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:27.173852   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:27.671431   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:27.684025   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:28.015459   14276 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (10.6552752s)
	I1218 11:46:28.015540   14276 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.2853801s)
	I1218 11:46:28.015567   14276 system_svc.go:56] duration metric: took 3.3028854s WaitForService to wait for kubelet.
	I1218 11:46:28.015567   14276 kubeadm.go:581] duration metric: took 34.4215988s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1218 11:46:28.016279   14276 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1218 11:46:28.015567   14276 node_conditions.go:102] verifying NodePressure condition ...
	I1218 11:46:28.017926   14276 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1218 11:46:28.018769   14276 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1218 11:46:28.018769   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
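
"scp memory --> file" means the manifest bytes are streamed from minikube's memory to a path on the VM over the existing SSH connection rather than copied from a local file. One way to get the same effect with golang.org/x/crypto/ssh, assuming an already-authenticated *ssh.Client and a shell-safe path (illustrative, not minikube's ssh_runner):

    package sshcopy

    import (
        "bytes"

        "golang.org/x/crypto/ssh"
    )

    // WriteRemote streams data to path on the remote host via `sudo tee`.
    func WriteRemote(client *ssh.Client, path string, data []byte) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        // tee copies stdin to the target file; its stdout echo is discarded.
        return sess.Run("sudo tee " + path + " >/dev/null")
    }
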
	I1218 11:46:28.037588   14276 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1218 11:46:28.037656   14276 node_conditions.go:123] node cpu capacity is 2
	I1218 11:46:28.037740   14276 node_conditions.go:105] duration metric: took 20.5935ms to run NodePressure ...
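
The NodePressure check reads each node's Status: the 17784752Ki ephemeral-storage and 2-CPU figures above come from Node.Status.Capacity, and a healthy node reports its pressure conditions as False. A client-go sketch of the same verification (helper name is hypothetical; reuse a clientset built as in the earlier sketches):

    package nodecheck

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // VerifyNodePressure fails if any node reports memory or disk pressure.
    func VerifyNodePressure(cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s ephemeral=%s cpu=%s\n", n.Name,
                n.Status.Capacity.StorageEphemeral().String(),
                n.Status.Capacity.Cpu().String())
            for _, c := range n.Status.Conditions {
                if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
                    c.Status == corev1.ConditionTrue {
                    return fmt.Errorf("node %s under pressure: %s", n.Name, c.Type)
                }
            }
        }
        return nil
    }
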
	I1218 11:46:28.037740   14276 start.go:228] waiting for startup goroutines ...
	I1218 11:46:28.039360   14276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (13.0304553s)
	I1218 11:46:28.039360   14276 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-922300"
	I1218 11:46:28.040357   14276 out.go:177] * Verifying csi-hostpath-driver addon...
	I1218 11:46:28.043375   14276 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1218 11:46:28.058130   14276 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1218 11:46:28.058300   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:28.175756   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:28.175907   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:28.258105   14276 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1218 11:46:28.258105   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1218 11:46:28.436747   14276 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1218 11:46:28.436747   14276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1218 11:46:28.567973   14276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1218 11:46:28.587988   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:28.686084   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:28.741908   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:28.963414   14276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.9950494s)
	I1218 11:46:29.059469   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:29.186504   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:29.189462   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:29.557966   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:29.667642   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:29.673423   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:30.064458   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:30.178571   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:30.178571   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:30.560197   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:30.682636   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:30.682636   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:31.042984   14276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.4750089s)
	I1218 11:46:31.051595   14276 addons.go:467] Verifying addon gcp-auth=true in "addons-922300"
	I1218 11:46:31.052307   14276 out.go:177] * Verifying gcp-auth addon...
	I1218 11:46:31.055190   14276 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1218 11:46:31.065966   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:31.071976   14276 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1218 11:46:31.072017   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:31.171608   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:31.171608   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:31.553327   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:31.563206   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:31.681512   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:31.685040   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:32.061837   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:32.072733   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:32.172699   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:32.172836   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:32.564542   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:32.565314   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:32.674104   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:32.674714   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:33.055956   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:33.059444   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:33.177860   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:33.185920   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:33.558662   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:33.562962   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:33.682138   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:33.683421   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:34.060145   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:34.066583   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:34.172294   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:34.174650   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:34.555142   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:34.564984   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:34.676362   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:34.682962   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:35.060956   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:35.064188   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:35.169882   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:35.170887   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:35.563520   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:35.566950   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:35.677472   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:35.678065   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:36.059141   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:36.062530   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:36.182126   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:36.185655   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:36.560327   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:36.566397   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:36.674143   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:36.678369   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:37.055004   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:37.066974   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:37.181404   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:37.183601   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:37.564085   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:37.567191   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:37.670847   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:37.671691   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:38.063840   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:38.067175   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:38.176640   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:38.179643   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:38.558157   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:38.561595   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:38.686218   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:38.690973   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:39.062337   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:39.066125   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:39.171722   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:39.174943   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:39.568036   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:39.569296   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:39.679812   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:39.680510   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:40.059604   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:40.061988   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:40.170724   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:40.175574   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:40.600326   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:40.604119   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:40.705466   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:40.706563   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:41.055185   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:41.058752   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:41.182176   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:41.182704   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:41.564119   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:41.564953   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:41.673186   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:41.676595   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:42.058719   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:42.063134   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:42.167111   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:42.171877   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:42.606467   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:42.609414   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:42.807885   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:42.811009   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:43.059026   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:43.061740   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:43.170887   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:43.171774   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:43.563161   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:43.567964   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:43.672519   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:43.673099   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:44.064712   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:44.068187   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:44.176205   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:44.176741   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:44.556785   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:44.561488   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:44.667209   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:44.673057   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:45.069032   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:45.074184   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:45.177011   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:45.177866   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:45.554224   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:45.563174   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:45.676726   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:45.677264   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:46.059745   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:46.063498   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:46.167360   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:46.171911   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:46.565135   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:46.565135   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:46.675640   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:46.676025   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:47.056141   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:47.064317   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:47.181134   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:47.183739   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:47.559413   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:47.562679   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:47.669734   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:47.672081   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:48.068879   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:48.070639   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:48.176151   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:48.178947   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:48.554522   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:48.565394   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:48.680472   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:48.689740   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:49.061180   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:49.064700   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:49.170729   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:49.172082   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:49.559476   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:49.564418   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:49.670003   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:49.672125   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:50.064615   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:50.065367   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:50.169319   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:50.172471   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:50.558424   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:50.561375   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:50.677034   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:50.677678   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:51.063289   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:51.063912   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:51.173027   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:51.174868   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:51.567270   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:51.569242   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:51.676807   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:51.676807   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:52.059818   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:52.061585   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:52.182370   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:52.183872   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:52.560376   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:52.563489   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:52.668618   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:52.672919   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:53.054025   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:53.063621   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:53.177084   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:53.177911   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:53.555153   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:53.560045   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:53.677815   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:53.678338   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:54.065501   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:54.069318   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:54.177761   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:54.177761   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:54.575321   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:54.578317   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:54.667324   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:54.676343   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:55.077984   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:55.079632   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:55.179906   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:55.181350   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:55.561754   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:55.564206   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:55.685782   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:55.687782   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:56.062689   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:56.064686   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:56.167198   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:56.171898   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:56.566819   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:56.567665   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:56.677092   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:56.677092   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:57.057213   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:57.062793   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:57.180426   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:57.181120   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:57.561188   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:57.563990   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:57.667242   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:57.669026   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:58.064118   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:58.066118   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:58.173103   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:58.174269   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:58.555710   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:58.559712   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:58.683004   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:58.685225   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:59.065333   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:59.068529   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:59.170491   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:59.172473   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:46:59.558236   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:46:59.560331   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:46:59.680870   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:46:59.681975   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:00.058038   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:00.060576   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:00.166563   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:00.167535   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:00.564079   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:00.566914   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:00.676206   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:00.676371   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:01.055399   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:01.059673   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:01.180590   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:01.182587   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:01.560444   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:01.562494   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:01.671199   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:01.672296   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:02.052787   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:02.066945   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:02.182832   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:02.183477   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:02.561005   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:02.563924   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:02.675846   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:02.677529   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:03.054491   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:03.066262   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:03.181177   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:03.183150   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:03.559302   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:03.565958   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:03.672763   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:03.673545   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:04.063611   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:04.064640   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:04.175876   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:04.176143   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:04.558195   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:04.561202   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:04.668563   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:04.669713   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:05.052553   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:05.064150   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:05.177377   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:05.177377   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:05.559520   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:05.562459   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:05.671081   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:05.671081   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:06.065709   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:06.066775   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:06.177962   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:06.180182   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:06.557251   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:06.562832   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:06.680000   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:06.681010   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:07.059503   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:07.063608   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:07.168213   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:07.171227   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:07.562383   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:07.564978   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:07.672417   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:07.673121   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:08.057223   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:08.060223   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:08.181173   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:08.183749   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:08.566678   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:08.567461   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:08.674191   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:08.674627   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:09.066772   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:09.071257   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:09.177336   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:09.178530   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:09.563457   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:09.566804   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:09.672297   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:09.700958   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:10.058795   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:10.062486   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:10.170825   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:10.176522   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:10.562789   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:10.564448   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:10.672444   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:10.673429   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:11.059623   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:11.063017   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:11.178556   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:11.180022   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:11.561490   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:11.564836   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:11.670647   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:11.671130   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:12.052729   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:12.064209   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:12.179186   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:12.179261   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:12.557055   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:12.560645   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:12.667918   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:12.670771   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:13.065244   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:13.068070   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:13.175720   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:13.178720   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:13.556083   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:13.562106   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:13.678372   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:13.680479   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:14.059085   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:14.061726   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:14.167649   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:14.173240   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:14.556414   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:14.563805   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:14.679243   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:14.681765   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:15.057793   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:15.060176   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:15.188604   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:15.191282   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:15.575558   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:15.592395   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:15.680527   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:15.680527   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:16.054247   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:16.065207   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:16.186281   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:16.186488   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:16.561611   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:16.565912   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:16.671202   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:16.673532   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:17.055671   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:17.059252   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:17.182403   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:17.183767   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:17.559600   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:17.562831   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:17.668375   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:17.672686   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:18.063984   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:18.067824   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:18.172548   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:18.174622   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:18.556470   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:18.559846   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:18.665226   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:18.674741   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:19.060262   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:19.064340   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:19.173656   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:19.177168   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:19.555210   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:19.566808   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:19.682441   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:19.682975   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:20.060999   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:20.067870   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:20.176523   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:20.177233   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:20.556628   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:20.561435   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:20.681189   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:20.681469   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:21.062990   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:21.066017   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:21.174100   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:21.176814   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:21.553833   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:21.566171   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:21.679272   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:21.682957   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:22.065605   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:22.067604   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:22.170606   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:22.170606   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:22.551606   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:22.563599   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:22.678170   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:22.678710   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:23.061232   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:23.063221   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:23.173262   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:23.173262   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:23.552156   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:23.563927   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:23.678648   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:23.678648   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:24.061880   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:24.065242   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:24.169079   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:24.171885   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:24.555696   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:24.560793   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:24.678640   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:24.680526   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:25.060336   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:25.062793   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:25.166422   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:25.171023   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:25.564163   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:25.568701   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:25.673638   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:25.673638   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:26.055291   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:26.059768   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:26.179554   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:26.179793   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:26.562750   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:26.565262   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:26.676623   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:26.676623   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:27.057239   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:27.061633   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:27.307669   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:27.310075   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:27.611245   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:27.612175   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:27.852104   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:27.852560   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:28.057049   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:28.059493   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:28.206570   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:28.206703   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:28.555748   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:28.561917   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:28.681327   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:28.681327   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:29.060714   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:29.064337   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:29.168298   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 11:47:29.168917   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:29.573573   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:29.574203   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:29.680026   14276 kapi.go:107] duration metric: took 1m5.0187491s to wait for kubernetes.io/minikube-addons=registry ...
	I1218 11:47:29.680187   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:30.057061   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:30.060019   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:30.178295   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:30.562308   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:30.568091   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:30.670862   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:31.068074   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:31.070342   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:31.180610   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:31.567411   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:31.639664   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:31.688358   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:32.054927   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:32.066960   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:32.179623   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:32.558719   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:32.562456   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:32.667221   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:33.066765   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:33.067590   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:33.176250   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:33.557622   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:33.561073   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:33.680139   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:34.067346   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:34.069465   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:34.170536   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:34.554292   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:34.564709   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:34.676979   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:35.105934   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:35.110564   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:35.181955   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:35.556629   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:35.563108   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:35.678386   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:36.059708   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:36.063236   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:36.169105   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:36.564746   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:36.568533   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:36.678684   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:37.057103   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:37.060436   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:37.181401   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:37.563478   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:37.565727   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:37.672924   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:38.058944   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:38.062872   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:38.180849   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:38.564330   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:38.566665   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:38.672091   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:39.250267   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:39.250973   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:39.256033   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:39.562304   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:39.566768   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:39.671055   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:40.064809   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:40.074968   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:40.174709   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:40.552561   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:40.563652   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:40.677178   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:41.060654   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:41.064780   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:41.171331   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:41.552757   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:41.564548   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:41.681195   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:42.065069   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:42.074780   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:42.183054   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:42.562851   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:42.567325   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:42.671412   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:43.054416   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:43.067252   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:43.177437   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:43.559955   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:43.563359   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:43.708630   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:44.064643   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:44.068250   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:44.174821   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:44.556134   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:44.559514   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:44.680325   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:45.066885   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:45.068982   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:45.172325   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:45.553901   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:45.563878   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:45.678134   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:46.068579   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:46.078728   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:46.180541   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:46.567146   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:46.567466   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:46.676498   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:47.059545   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:47.062092   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:47.167422   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:47.568274   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:47.571884   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:47.673578   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:48.055413   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:48.063247   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:48.177191   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:48.555993   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:48.560389   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:48.665471   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:49.063439   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:49.066908   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:49.172446   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:49.558113   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:49.561724   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:49.679591   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:50.062531   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:50.065496   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:50.169911   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:50.562610   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:50.566056   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:50.667896   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:51.059883   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:51.063292   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:51.167653   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:51.563310   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:51.567244   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:51.671192   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:52.067310   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:52.073543   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:52.168925   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:52.561324   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:52.565204   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:52.670804   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:53.059681   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:53.063515   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:53.184431   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:53.560238   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:53.564231   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:53.670432   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:54.054311   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:54.066818   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:54.179734   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:54.558704   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:54.561231   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:54.667567   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:55.060352   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:55.063906   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:55.169231   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:55.553668   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:55.563945   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:55.674394   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:56.059615   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:56.062548   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:56.166106   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:56.560231   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:56.563316   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:56.667412   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:57.062295   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:57.066867   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:57.168029   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:57.568078   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:57.569269   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:57.677014   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:58.061339   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:58.065736   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:58.166950   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:58.565399   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:58.566000   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:58.672578   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:59.056140   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:59.063337   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:59.180978   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:47:59.564168   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:47:59.568192   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:47:59.672205   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:00.052166   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:00.065020   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:00.176353   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:00.559600   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:00.563040   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:00.669008   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:01.052873   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:01.063935   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:01.177361   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:01.556212   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:01.558905   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:01.679883   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:02.056505   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:02.060087   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:02.187491   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:02.559551   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:02.562980   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:02.666768   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:03.060326   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:03.064416   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:03.170100   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:03.561524   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:03.565327   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:03.672659   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:04.057499   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:04.061327   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:04.183290   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:04.564886   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:04.569157   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:04.672354   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:05.067437   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:05.068569   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:05.175461   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:05.555873   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:05.559698   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:05.677813   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:06.086617   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:06.091060   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:06.168264   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:06.563842   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:06.566978   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:06.680252   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:07.056422   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:07.061319   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:07.183253   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:07.560156   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:07.567769   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:07.681152   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:08.057658   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:08.078889   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:08.166867   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:08.565467   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:08.567661   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:08.677458   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:09.055204   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:09.065823   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:09.177290   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:09.555428   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:09.562074   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:09.681423   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:10.060862   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:10.064257   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:10.171085   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:10.557264   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:10.561454   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:10.665903   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:11.066683   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:11.067353   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:11.173364   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:11.555165   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:11.568316   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:11.680529   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:12.063891   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:12.066122   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:12.170555   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:12.567434   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:12.568435   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:12.673753   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:13.065081   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:13.066717   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:13.175180   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:13.556318   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:13.561266   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:13.682442   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:14.062997   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:14.066707   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:14.172075   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:14.563436   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:14.566326   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:14.673079   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:15.057900   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:15.060909   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:15.182973   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:15.566159   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:15.568010   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:15.673652   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:16.054585   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:16.067067   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:16.180044   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:16.563468   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:16.565473   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:16.671033   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:17.066105   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:17.068060   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:17.178059   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:17.555746   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:17.559813   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:17.680386   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:18.061543   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:18.065143   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:18.170177   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:18.555210   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:18.564898   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:18.678654   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:19.065830   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:19.067860   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:19.170847   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:19.555185   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:19.566212   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:19.681229   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:20.066857   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:20.067838   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:20.254513   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:20.556408   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:20.560880   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:20.680293   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:21.062884   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:21.066258   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 11:48:21.173901   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:21.565915   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:21.566607   14276 kapi.go:107] duration metric: took 1m53.5232265s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1218 11:48:21.671630   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:22.061274   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:22.172361   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:22.564758   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:22.675838   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:23.064994   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:23.175962   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:23.567041   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:23.676507   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:24.063860   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:24.172814   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:24.564242   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:24.675124   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:25.073756   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:25.166396   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:25.562267   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:25.672193   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:26.075580   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:26.170580   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:26.565103   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:26.673790   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:27.066180   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:27.175567   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:27.566560   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:27.675622   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:28.066435   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:28.176670   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:28.568745   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:28.676681   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:29.067521   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:29.179509   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:29.575564   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:29.670295   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:30.062455   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:30.174786   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:30.563212   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:30.673258   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:31.066603   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:31.175745   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:31.564178   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:31.673755   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:32.074949   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:32.169714   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:32.574719   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:32.669792   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:33.076532   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:33.171036   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:33.573308   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:33.667669   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:34.073449   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:34.181620   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:34.570615   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:34.679681   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:35.067832   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:35.177390   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:35.568919   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:35.674717   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:36.062585   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:36.172435   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:36.575230   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:36.670847   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:37.075442   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:37.169220   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:37.571822   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:37.668968   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:38.079127   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:38.169889   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:38.576409   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:38.670002   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:39.062168   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:39.171161   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:39.564985   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:39.676543   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:40.069816   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:40.177647   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:40.570354   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:40.682221   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:41.070976   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:41.180464   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:41.569642   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:41.680942   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:42.068210   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:42.179152   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:42.571255   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:42.668311   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:43.062062   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:43.173427   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:43.566190   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:43.677508   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:44.070991   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:44.182796   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:44.567490   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:44.675231   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:45.072157   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:45.167941   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:45.565644   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:45.676859   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:46.075728   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:46.174026   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:46.568917   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:46.679648   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:47.061182   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:47.172779   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:47.902791   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:47.903809   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:48.071549   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:48.208124   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:48.600570   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:48.702691   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:49.065869   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:49.179543   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:49.568181   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:49.676657   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:50.074894   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:50.169443   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:50.566742   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:50.681442   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:51.063374   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:51.180251   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:51.563687   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:51.675760   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:52.069557   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:52.180193   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:52.562129   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:52.674337   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:53.068561   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:53.178671   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:53.573686   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:53.669404   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:54.063856   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:54.174446   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:54.575390   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:54.675233   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:55.066256   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:55.179821   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:55.560926   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:55.670429   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:56.069585   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:56.166746   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:56.575134   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:56.752072   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:57.075068   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:57.172183   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:57.571728   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:57.667712   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:58.077410   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:58.173985   14276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 11:48:58.568192   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:58.682477   14276 kapi.go:107] duration metric: took 2m34.0227248s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1218 11:48:59.076531   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:48:59.571262   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:49:00.113658   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:49:00.568899   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:49:01.062349   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:49:01.568979   14276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 11:49:02.064633   14276 kapi.go:107] duration metric: took 2m31.0093122s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1218 11:49:02.065333   14276 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-922300 cluster.
	I1218 11:49:02.066163   14276 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1218 11:49:02.066761   14276 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1218 11:49:02.067527   14276 out.go:177] * Enabled addons: helm-tiller, nvidia-device-plugin, cloud-spanner, ingress-dns, metrics-server, inspektor-gadget, storage-provisioner, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1218 11:49:02.068189   14276 addons.go:502] enable addons completed in 3m9.8135716s: enabled=[helm-tiller nvidia-device-plugin cloud-spanner ingress-dns metrics-server inspektor-gadget storage-provisioner default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1218 11:49:02.068310   14276 start.go:233] waiting for cluster config update ...
	I1218 11:49:02.068310   14276 start.go:242] writing updated cluster config ...
	I1218 11:49:02.082555   14276 ssh_runner.go:195] Run: rm -f paused
	I1218 11:49:02.325433   14276 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1218 11:49:02.326756   14276 out.go:177] * Done! kubectl is now configured to use "addons-922300" cluster and "default" namespace by default
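
The repeated kapi.go:96 entries above reflect a plain label-selector poll: minikube lists the pods matching each label roughly every half second and logs the phase until it leaves Pending, at which point kapi.go:107 reports the total wait. A minimal sketch of that pattern, assuming client-go; the function name, interval, and timeout are illustrative assumptions, not minikube's actual code:

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodsRunning polls pods matching a label selector until all
    // report phase Running, mirroring the kapi.go:96/107 lines above.
    // The 500ms interval and the timeout are assumptions for this sketch.
    func waitForPodsRunning(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        start := time.Now()
        for time.Since(start) < timeout {
            pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
                fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
                return nil
            }
            fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", selector)
    }

    func allRunning(pods []corev1.Pod) bool {
        for _, p := range pods {
            if p.Status.Phase != corev1.PodRunning {
                return false
            }
        }
        return true
    }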
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-12-18 11:43:43 UTC, ends at Mon 2023-12-18 11:50:10 UTC. --
	Dec 18 11:50:08 addons-922300 dockerd[1328]: time="2023-12-18T11:50:08.229125012Z" level=info msg="shim disconnected" id=4ca247059ac6113e637a88180757fd4f17e5e4bc8be7f49d5133d2350eb8bf62 namespace=moby
	Dec 18 11:50:08 addons-922300 dockerd[1328]: time="2023-12-18T11:50:08.229389511Z" level=warning msg="cleaning up after shim disconnected" id=4ca247059ac6113e637a88180757fd4f17e5e4bc8be7f49d5133d2350eb8bf62 namespace=moby
	Dec 18 11:50:08 addons-922300 dockerd[1328]: time="2023-12-18T11:50:08.229473110Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 18 11:50:08 addons-922300 dockerd[1322]: time="2023-12-18T11:50:08.247189741Z" level=info msg="ignoring event" container=4ca247059ac6113e637a88180757fd4f17e5e4bc8be7f49d5133d2350eb8bf62 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 11:50:08 addons-922300 dockerd[1328]: time="2023-12-18T11:50:08.503137440Z" level=info msg="shim disconnected" id=e3163bb02187e9ba09f24e293cd5930ccf5d2ab2d86d0dcca79e87596e260203 namespace=moby
	Dec 18 11:50:08 addons-922300 dockerd[1328]: time="2023-12-18T11:50:08.503557339Z" level=warning msg="cleaning up after shim disconnected" id=e3163bb02187e9ba09f24e293cd5930ccf5d2ab2d86d0dcca79e87596e260203 namespace=moby
	Dec 18 11:50:08 addons-922300 dockerd[1328]: time="2023-12-18T11:50:08.503705638Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 18 11:50:08 addons-922300 dockerd[1322]: time="2023-12-18T11:50:08.570452677Z" level=info msg="ignoring event" container=e3163bb02187e9ba09f24e293cd5930ccf5d2ab2d86d0dcca79e87596e260203 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 11:50:09 addons-922300 dockerd[1328]: time="2023-12-18T11:50:09.246095552Z" level=info msg="shim disconnected" id=7e46219cc79cf83fb03cb088f3f62d39541dc1f36e9445cb3ffa86b74196a9a5 namespace=moby
	Dec 18 11:50:09 addons-922300 dockerd[1328]: time="2023-12-18T11:50:09.246295451Z" level=warning msg="cleaning up after shim disconnected" id=7e46219cc79cf83fb03cb088f3f62d39541dc1f36e9445cb3ffa86b74196a9a5 namespace=moby
	Dec 18 11:50:09 addons-922300 dockerd[1328]: time="2023-12-18T11:50:09.246312851Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 18 11:50:09 addons-922300 dockerd[1322]: time="2023-12-18T11:50:09.250816534Z" level=info msg="ignoring event" container=7e46219cc79cf83fb03cb088f3f62d39541dc1f36e9445cb3ffa86b74196a9a5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 11:50:09 addons-922300 dockerd[1328]: time="2023-12-18T11:50:09.319706970Z" level=info msg="shim disconnected" id=8c8b039ee5ef6f7ce6d48b11e5096ef3a74e6b77e2973e08511fdba06c20d680 namespace=moby
	Dec 18 11:50:09 addons-922300 dockerd[1328]: time="2023-12-18T11:50:09.325517647Z" level=warning msg="cleaning up after shim disconnected" id=8c8b039ee5ef6f7ce6d48b11e5096ef3a74e6b77e2973e08511fdba06c20d680 namespace=moby
	Dec 18 11:50:09 addons-922300 dockerd[1328]: time="2023-12-18T11:50:09.325741346Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 18 11:50:09 addons-922300 dockerd[1328]: time="2023-12-18T11:50:09.321413363Z" level=info msg="shim disconnected" id=c75ea2a38c9bc4c46b6b9aaa5904b977ad0d8a1120ef103e5566cd064ddefb32 namespace=moby
	Dec 18 11:50:09 addons-922300 dockerd[1328]: time="2023-12-18T11:50:09.326188145Z" level=warning msg="cleaning up after shim disconnected" id=c75ea2a38c9bc4c46b6b9aaa5904b977ad0d8a1120ef103e5566cd064ddefb32 namespace=moby
	Dec 18 11:50:09 addons-922300 dockerd[1328]: time="2023-12-18T11:50:09.326417444Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 18 11:50:09 addons-922300 dockerd[1322]: time="2023-12-18T11:50:09.335716008Z" level=info msg="ignoring event" container=8c8b039ee5ef6f7ce6d48b11e5096ef3a74e6b77e2973e08511fdba06c20d680 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 11:50:09 addons-922300 dockerd[1322]: time="2023-12-18T11:50:09.335797508Z" level=info msg="ignoring event" container=c75ea2a38c9bc4c46b6b9aaa5904b977ad0d8a1120ef103e5566cd064ddefb32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 11:50:09 addons-922300 dockerd[1328]: time="2023-12-18T11:50:09.435954823Z" level=warning msg="cleanup warnings time=\"2023-12-18T11:50:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Dec 18 11:50:09 addons-922300 dockerd[1328]: time="2023-12-18T11:50:09.988682101Z" level=info msg="shim disconnected" id=f8903dad34bc9c35a0413a4f512c1d7c45e691d35d60af89d615ebfe0803d4ff namespace=moby
	Dec 18 11:50:09 addons-922300 dockerd[1328]: time="2023-12-18T11:50:09.989147599Z" level=warning msg="cleaning up after shim disconnected" id=f8903dad34bc9c35a0413a4f512c1d7c45e691d35d60af89d615ebfe0803d4ff namespace=moby
	Dec 18 11:50:09 addons-922300 dockerd[1328]: time="2023-12-18T11:50:09.989292598Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 18 11:50:09 addons-922300 dockerd[1322]: time="2023-12-18T11:50:09.991948188Z" level=info msg="ignoring event" container=f8903dad34bc9c35a0413a4f512c1d7c45e691d35d60af89d615ebfe0803d4ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                         ATTEMPT             POD ID              POD
	8fdf6e5c7d5cd       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931            26 seconds ago       Exited              gadget                       4                   c89d543ce02ce       gadget-46nrg
	c6cb0a2f28f8c       a416a98b71e22                                                                                                                31 seconds ago       Exited              helper-pod                   0                   4871ddebc9c8b       helper-pod-delete-pvc-72013ae2-6033-4d78-8186-560dbbf121d3
	bba3552c3371a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                 About a minute ago   Running             gcp-auth                     0                   4adcf7f53b7dd       gcp-auth-d4c87556c-psb2s
	0929f62e014b8       registry.k8s.io/ingress-nginx/controller@sha256:5b161f051d017e55d358435f295f5e9a297e66158f136321d9b04520ec6c48a3             About a minute ago   Running             controller                   0                   5376c7797f8fc       ingress-nginx-controller-7c6974c4d8-zkrpv
	703ae04f66959       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b             2 minutes ago        Exited              csi-attacher                 0                   e3163bb02187e       csi-hostpath-attacher-0
	b63706cc6184f       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7              2 minutes ago        Exited              csi-resizer                  0                   7e46219cc79cf       csi-hostpath-resizer-0
	5235e22db8f67       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80   2 minutes ago        Exited              patch                        0                   3c24525b443a4       ingress-nginx-admission-patch-xzzk4
	52073a4208268       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80   2 minutes ago        Exited              create                       0                   9117c2cbb6006       ingress-nginx-admission-create-nmhw5
	2817f1f1b2c0f       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       2 minutes ago        Running             local-path-provisioner       0                   ffa4b4d345a5a       local-path-provisioner-78b46b4d5c-bm5wp
	55f50fda26877       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      2 minutes ago        Running             volume-snapshot-controller   0                   3a3d53be9ad04       snapshot-controller-58dbcc7b99-ptqqj
	fbcec655ccefd       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      2 minutes ago        Running             volume-snapshot-controller   0                   70535a9c66726       snapshot-controller-58dbcc7b99-wxjmq
	03765ac1e4a9f       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f             2 minutes ago        Running             minikube-ingress-dns         0                   37cd0fd63bcdc       kube-ingress-dns-minikube
	9779d406c9236       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                  3 minutes ago        Running             tiller                       0                   7ac92bd6c3cbc       tiller-deploy-7b677967b9-btzrr
	8c8b039ee5ef6       gcr.io/cloud-spanner-emulator/emulator@sha256:9ded3fac22d4d1c85ae51473e3876e2377f5179192fea664409db0fe87e05ece               3 minutes ago        Exited              cloud-spanner-emulator       0                   f8903dad34bc9       cloud-spanner-emulator-5649c69bf6-ggs97
	98132f8e72d9f       6e38f40d628db                                                                                                                3 minutes ago        Running             storage-provisioner          0                   a020bfb7b5fff       storage-provisioner
	c6eeb77c2fa92       ead0a4a53df89                                                                                                                4 minutes ago        Running             coredns                      0                   dccd84df70c9a       coredns-5dd5756b68-fqjjr
	d8f85c8940614       83f6cc407eed8                                                                                                                4 minutes ago        Running             kube-proxy                   0                   27dfe74b65d96       kube-proxy-crmjb
	8af1b6504353b       73deb9a3f7025                                                                                                                4 minutes ago        Running             etcd                         0                   a219ab72e259b       etcd-addons-922300
	faa8d2b416783       e3db313c6dbc0                                                                                                                4 minutes ago        Running             kube-scheduler               0                   fe53f8d1f164e       kube-scheduler-addons-922300
	77180e5bd6192       7fe0e6f37db33                                                                                                                4 minutes ago        Running             kube-apiserver               0                   5be6094615d9c       kube-apiserver-addons-922300
	26071d27711ef       d058aa5ab969c                                                                                                                4 minutes ago        Running             kube-controller-manager      0                   13c0b1c612098       kube-controller-manager-addons-922300
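
The gcp-auth container above (bba3552c3371a) is the webhook behind the credential-mounting notice at the end of the start log. Per that notice, a pod opts out by carrying a label with the gcp-auth-skip-secret key; a hypothetical spec, reusing the client-go imports from the earlier sketch (the pod name, image, and label value are placeholders — only the label key comes from the minikube output):

    // Hypothetical pod carrying the opt-out label quoted above.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "no-gcp-creds",
            Labels: map[string]string{"gcp-auth-skip-secret": "true"},
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
        },
    }
    _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})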
	
	* 
	* ==> controller_ingress [0929f62e014b] <==
	* W1218 11:48:58.132651       7 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I1218 11:48:58.132946       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I1218 11:48:58.151431       7 main.go:249] "Running in Kubernetes cluster" major="1" minor="28" git="v1.28.4" state="clean" commit="bae2c62678db2b5053817bc97181fcc2e8388103" platform="linux/amd64"
	I1218 11:48:58.690875       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I1218 11:48:58.719168       7 ssl.go:536] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I1218 11:48:58.735681       7 nginx.go:260] "Starting NGINX Ingress controller"
	I1218 11:48:58.759762       7 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"5cb18aff-d465-4224-90cf-65767d647ee8", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I1218 11:48:58.770360       7 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"24051407-703c-4ff2-a0fd-7db519c20632", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I1218 11:48:58.770614       7 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"b7b84f5e-6138-48cf-878e-acf4a5499fe0", APIVersion:"v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I1218 11:48:59.938374       7 nginx.go:303] "Starting NGINX process"
	I1218 11:48:59.939416       7 leaderelection.go:245] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I1218 11:48:59.942193       7 nginx.go:323] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I1218 11:48:59.942824       7 controller.go:190] "Configuration changes detected, backend reload required"
	I1218 11:49:00.054127       7 leaderelection.go:255] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I1218 11:49:00.055277       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-7c6974c4d8-zkrpv"
	I1218 11:49:00.070422       7 controller.go:210] "Backend successfully reloaded"
	I1218 11:49:00.070499       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I1218 11:49:00.070694       7 event.go:298] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7c6974c4d8-zkrpv", UID:"699d2a0e-72d3-4016-9ac5-e3562b186e32", APIVersion:"v1", ResourceVersion:"1239", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I1218 11:49:00.125579       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-7c6974c4d8-zkrpv" node="addons-922300"
	  Build:         846d251814a09d8a5d8d28e2e604bfc7749bcb49
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.21.6
	
	-------------------------------------------------------------------------------
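
The leaderelection.go lines above (acquiring the ingress-nginx/ingress-nginx-leader lease, then electing this pod) are client-go's standard Lease-based election. A minimal sketch, assuming the k8s.io/client-go/tools/leaderelection and resourcelock packages on top of the earlier imports; the timings and callback bodies are illustrative:

    lock := &resourcelock.LeaseLock{
        LeaseMeta:  metav1.ObjectMeta{Namespace: "ingress-nginx", Name: "ingress-nginx-leader"},
        Client:     client.CoordinationV1(),
        LockConfig: resourcelock.ResourceLockConfig{Identity: podName}, // e.g. the pod name elected above
    }
    leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
        Lock:          lock,
        LeaseDuration: 15 * time.Second,
        RenewDeadline: 10 * time.Second,
        RetryPeriod:   2 * time.Second,
        Callbacks: leaderelection.LeaderCallbacks{
            OnStartedLeading: func(ctx context.Context) { /* begin publishing ingress status */ },
            OnStoppedLeading: func() { /* step down; another replica takes over */ },
        },
    })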
	
	
	* 
	* ==> coredns [c6eeb77c2fa9] <==
	* [INFO] plugin/reload: Running configuration SHA512 = e48cc74d4d4792b6e037fc6364095f03dd97c499e20d6def56cab70b374eb190d7fd9d3720ca48b7382edb6d6fbe7d631f96f64e38a41e6bd8617ab8ab6ece2c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:57315 - 50055 "HINFO IN 8451967316064033127.3258610574365984610. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.073787032s
	[INFO] 10.244.0.8:48316 - 27448 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000249602s
	[INFO] 10.244.0.8:48316 - 48191 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0000718s
	[INFO] 10.244.0.8:60990 - 2600 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000311102s
	[INFO] 10.244.0.8:60990 - 25895 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000308702s
	[INFO] 10.244.0.8:46655 - 30204 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000255402s
	[INFO] 10.244.0.8:46655 - 46843 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000098001s
	[INFO] 10.244.0.8:54851 - 13228 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000413203s
	[INFO] 10.244.0.8:54851 - 38226 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000137401s
	[INFO] 10.244.0.8:36341 - 20287 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000174301s
	[INFO] 10.244.0.8:45426 - 25605 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000131801s
	[INFO] 10.244.0.8:33330 - 39439 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000262902s
	[INFO] 10.244.0.8:52356 - 2765 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0000841s
	[INFO] 10.244.0.21:35038 - 33168 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000426795s
	[INFO] 10.244.0.21:45107 - 15160 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000099799s
	[INFO] 10.244.0.21:52067 - 24889 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000107399s
	[INFO] 10.244.0.21:58009 - 18312 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000143199s
	[INFO] 10.244.0.21:47313 - 35445 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000089499s
	[INFO] 10.244.0.21:50135 - 31728 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000226197s
	[INFO] 10.244.0.21:33709 - 56447 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.00162868s
	[INFO] 10.244.0.21:44787 - 19057 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 192 0.001488981s
	[INFO] 10.244.0.25:38103 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000274798s
	[INFO] 10.244.0.25:33637 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000201298s
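
The NXDOMAIN/NOERROR pairs above are ordinary resolv.conf search-path expansion: each lookup is retried with the pod's search suffixes (…kube-system.svc.cluster.local, …svc.cluster.local, …cluster.local) before the fully qualified name answers. Appending a trailing dot marks a name fully qualified and skips that churn; a small Go illustration, assuming it runs inside the cluster (the service name is the registry service exercised by the test):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Trailing dot = fully qualified, so the resolver skips the
        // search-suffix expansion visible in the CoreDNS log above.
        addrs, err := net.LookupHost("registry.kube-system.svc.cluster.local.")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println(addrs)
    }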
	
	* 
	* ==> describe nodes <==
	* Name:               addons-922300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-922300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30d8ecd1811578f7b9db580c501c654c189f68d4
	                    minikube.k8s.io/name=addons-922300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_18T11_45_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-922300
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Dec 2023 11:45:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-922300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Dec 2023 11:50:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Dec 2023 11:49:45 +0000   Mon, 18 Dec 2023 11:45:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Dec 2023 11:49:45 +0000   Mon, 18 Dec 2023 11:45:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Dec 2023 11:49:45 +0000   Mon, 18 Dec 2023 11:45:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Dec 2023 11:49:45 +0000   Mon, 18 Dec 2023 11:45:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.238.87
	  Hostname:    addons-922300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914580Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914580Ki
	  pods:               110
	System Info:
	  Machine ID:                 2c205e29e6244ba2a34e594f3b4561f2
	  System UUID:                e785be98-088a-874c-a2d0-a8827b4fbe54
	  Boot ID:                    077a26fd-1b50-4009-b1e3-0d81367fb7df
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  gadget                      gadget-46nrg                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  gcp-auth                    gcp-auth-d4c87556c-psb2s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  headlamp                    headlamp-777fd4b855-mcx44                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  ingress-nginx               ingress-nginx-controller-7c6974c4d8-zkrpv    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m47s
	  kube-system                 coredns-5dd5756b68-fqjjr                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m19s
	  kube-system                 etcd-addons-922300                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m34s
	  kube-system                 kube-apiserver-addons-922300                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-controller-manager-addons-922300        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-proxy-crmjb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-scheduler-addons-922300                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 snapshot-controller-58dbcc7b99-ptqqj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 snapshot-controller-58dbcc7b99-wxjmq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 tiller-deploy-7b677967b9-btzrr               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  local-path-storage          local-path-provisioner-78b46b4d5c-bm5wp      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m40s (x8 over 4m40s)  kubelet          Node addons-922300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m40s (x8 over 4m40s)  kubelet          Node addons-922300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m40s (x7 over 4m40s)  kubelet          Node addons-922300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m31s                  kubelet          Node addons-922300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m31s                  kubelet          Node addons-922300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m31s                  kubelet          Node addons-922300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m29s                  kubelet          Node addons-922300 status is now: NodeReady
	  Normal  RegisteredNode           4m19s                  node-controller  Node addons-922300 event: Registered Node addons-922300 in Controller
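
(The table and events above are kubectl-describe output embedded in "minikube logs"; the duplicated NodeHasSufficient* events at 4m40s and 4m31s bracket the kubelet restart visible in the "Starting kubelet." entry. A minimal Go sketch that captures the same node snapshot outside the test harness — the context and node name come from this run, kubectl on PATH is an assumption:)

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // Same data as the section above, fetched directly.
        out, err := exec.Command("kubectl", "--context", "addons-922300",
            "describe", "node", "addons-922300").CombinedOutput()
        if err != nil {
            log.Fatalf("describe node: %v\n%s", err, out)
        }
        fmt.Printf("%s", out)
    }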
	
	* 
	* ==> dmesg <==
	* [  +0.195775] systemd-fstab-generator[1172]: Ignoring "noauto" for root device
	[  +0.191065] systemd-fstab-generator[1183]: Ignoring "noauto" for root device
	[  +0.208788] systemd-fstab-generator[1194]: Ignoring "noauto" for root device
	[  +0.242299] systemd-fstab-generator[1208]: Ignoring "noauto" for root device
	[  +9.081205] systemd-fstab-generator[1313]: Ignoring "noauto" for root device
	[  +6.202492] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.468596] systemd-fstab-generator[1682]: Ignoring "noauto" for root device
	[  +0.785525] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.548463] systemd-fstab-generator[2633]: Ignoring "noauto" for root device
	[Dec18 11:46] kauditd_printk_skb: 24 callbacks suppressed
	[ +10.678170] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.458372] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.642795] kauditd_printk_skb: 45 callbacks suppressed
	[Dec18 11:47] kauditd_printk_skb: 22 callbacks suppressed
	[Dec18 11:48] kauditd_printk_skb: 26 callbacks suppressed
	[ +29.929010] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.002583] kauditd_printk_skb: 3 callbacks suppressed
	[Dec18 11:49] kauditd_printk_skb: 24 callbacks suppressed
	[  +7.576855] kauditd_printk_skb: 4 callbacks suppressed
	[ +10.894781] kauditd_printk_skb: 4 callbacks suppressed
	[  +2.683749] hrtimer: interrupt took 1186390 ns
	[  +3.828516] kauditd_printk_skb: 8 callbacks suppressed
	[ +16.034702] kauditd_printk_skb: 4 callbacks suppressed
	[Dec18 11:50] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.055618] kauditd_printk_skb: 21 callbacks suppressed
	
	* 
	* ==> etcd [8af1b6504353] <==
	* {"level":"info","ts":"2023-12-18T11:47:27.308496Z","caller":"traceutil/trace.go:171","msg":"trace[1137937349] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:960; }","duration":"127.209474ms","start":"2023-12-18T11:47:27.181279Z","end":"2023-12-18T11:47:27.308489Z","steps":["trace[1137937349] 'range keys from in-memory index tree'  (duration: 126.585269ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-18T11:47:27.607952Z","caller":"traceutil/trace.go:171","msg":"trace[326448885] transaction","detail":"{read_only:false; response_revision:961; number_of_response:1; }","duration":"146.434206ms","start":"2023-12-18T11:47:27.461479Z","end":"2023-12-18T11:47:27.607913Z","steps":["trace[326448885] 'process raft request'  (duration: 145.687801ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T11:47:27.850883Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.32026ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13488"}
	{"level":"info","ts":"2023-12-18T11:47:27.850954Z","caller":"traceutil/trace.go:171","msg":"trace[1663017497] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:961; }","duration":"183.41696ms","start":"2023-12-18T11:47:27.667526Z","end":"2023-12-18T11:47:27.850943Z","steps":["trace[1663017497] 'range keys from in-memory index tree'  (duration: 183.106058ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T11:47:27.851456Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.631559ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81911"}
	{"level":"info","ts":"2023-12-18T11:47:27.851482Z","caller":"traceutil/trace.go:171","msg":"trace[2034200845] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:961; }","duration":"168.660359ms","start":"2023-12-18T11:47:27.682815Z","end":"2023-12-18T11:47:27.851475Z","steps":["trace[2034200845] 'range keys from in-memory index tree'  (duration: 168.301556ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-18T11:47:35.104092Z","caller":"traceutil/trace.go:171","msg":"trace[184977368] transaction","detail":"{read_only:false; response_revision:975; number_of_response:1; }","duration":"186.907227ms","start":"2023-12-18T11:47:34.917151Z","end":"2023-12-18T11:47:35.104058Z","steps":["trace[184977368] 'process raft request'  (duration: 186.724426ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-18T11:47:37.801735Z","caller":"traceutil/trace.go:171","msg":"trace[729444590] transaction","detail":"{read_only:false; response_revision:977; number_of_response:1; }","duration":"120.135396ms","start":"2023-12-18T11:47:37.681584Z","end":"2023-12-18T11:47:37.801719Z","steps":["trace[729444590] 'process raft request'  (duration: 119.956995ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-18T11:47:39.250563Z","caller":"traceutil/trace.go:171","msg":"trace[2083851294] linearizableReadLoop","detail":"{readStateIndex:1019; appliedIndex:1018; }","duration":"188.429558ms","start":"2023-12-18T11:47:39.062117Z","end":"2023-12-18T11:47:39.250546Z","steps":["trace[2083851294] 'read index received'  (duration: 188.304257ms)","trace[2083851294] 'applied index is now lower than readState.Index'  (duration: 124.601µs)"],"step_count":2}
	{"level":"info","ts":"2023-12-18T11:47:39.25093Z","caller":"traceutil/trace.go:171","msg":"trace[292096824] transaction","detail":"{read_only:false; response_revision:980; number_of_response:1; }","duration":"208.395071ms","start":"2023-12-18T11:47:39.042523Z","end":"2023-12-18T11:47:39.250918Z","steps":["trace[292096824] 'process raft request'  (duration: 207.939769ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T11:47:39.251284Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.745376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10575"}
	{"level":"info","ts":"2023-12-18T11:47:39.251322Z","caller":"traceutil/trace.go:171","msg":"trace[151747474] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:980; }","duration":"173.798776ms","start":"2023-12-18T11:47:39.077515Z","end":"2023-12-18T11:47:39.251314Z","steps":["trace[151747474] 'agreement among raft nodes before linearized reading'  (duration: 173.670776ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T11:47:39.251645Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.536865ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82022"}
	{"level":"info","ts":"2023-12-18T11:47:39.25172Z","caller":"traceutil/trace.go:171","msg":"trace[2027347045] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:980; }","duration":"189.614166ms","start":"2023-12-18T11:47:39.062094Z","end":"2023-12-18T11:47:39.251708Z","steps":["trace[2027347045] 'agreement among raft nodes before linearized reading'  (duration: 189.316364ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-18T11:48:20.255179Z","caller":"traceutil/trace.go:171","msg":"trace[272122875] transaction","detail":"{read_only:false; response_revision:1160; number_of_response:1; }","duration":"133.114687ms","start":"2023-12-18T11:48:20.122028Z","end":"2023-12-18T11:48:20.255142Z","steps":["trace[272122875] 'process raft request'  (duration: 132.447784ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-18T11:48:47.903067Z","caller":"traceutil/trace.go:171","msg":"trace[1604455249] linearizableReadLoop","detail":"{readStateIndex:1274; appliedIndex:1273; }","duration":"337.69463ms","start":"2023-12-18T11:48:47.565353Z","end":"2023-12-18T11:48:47.903048Z","steps":["trace[1604455249] 'read index received'  (duration: 337.518533ms)","trace[1604455249] 'applied index is now lower than readState.Index'  (duration: 175.297µs)"],"step_count":2}
	{"level":"info","ts":"2023-12-18T11:48:47.90339Z","caller":"traceutil/trace.go:171","msg":"trace[1388612167] transaction","detail":"{read_only:false; response_revision:1218; number_of_response:1; }","duration":"372.30619ms","start":"2023-12-18T11:48:47.531073Z","end":"2023-12-18T11:48:47.903379Z","steps":["trace[1388612167] 'process raft request'  (duration: 371.886397ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T11:48:47.90348Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-18T11:48:47.531059Z","time spent":"372.356689ms","remote":"127.0.0.1:43206","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1211 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2023-12-18T11:48:47.903691Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"338.34732ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4149"}
	{"level":"info","ts":"2023-12-18T11:48:47.903786Z","caller":"traceutil/trace.go:171","msg":"trace[1217785129] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1218; }","duration":"338.444019ms","start":"2023-12-18T11:48:47.565333Z","end":"2023-12-18T11:48:47.903777Z","steps":["trace[1217785129] 'agreement among raft nodes before linearized reading'  (duration: 338.317421ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T11:48:47.903816Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-18T11:48:47.565324Z","time spent":"338.483718ms","remote":"127.0.0.1:43188","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":4172,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2023-12-18T11:48:47.903973Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"226.280269ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13906"}
	{"level":"info","ts":"2023-12-18T11:48:47.903996Z","caller":"traceutil/trace.go:171","msg":"trace[820185118] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1218; }","duration":"226.303668ms","start":"2023-12-18T11:48:47.677686Z","end":"2023-12-18T11:48:47.90399Z","steps":["trace[820185118] 'agreement among raft nodes before linearized reading'  (duration: 226.240769ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T11:48:48.553663Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.717654ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-12-18T11:48:48.553867Z","caller":"traceutil/trace.go:171","msg":"trace[1583967337] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1220; }","duration":"126.930351ms","start":"2023-12-18T11:48:48.426903Z","end":"2023-12-18T11:48:48.553834Z","steps":["trace[1583967337] 'range keys from in-memory index tree'  (duration: 126.626156ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [bba3552c3371] <==
	* 2023/12/18 11:49:00 GCP Auth Webhook started!
	2023/12/18 11:49:03 Ready to marshal response ...
	2023/12/18 11:49:03 Ready to write response ...
	2023/12/18 11:49:03 Ready to marshal response ...
	2023/12/18 11:49:03 Ready to write response ...
	2023/12/18 11:49:04 Ready to marshal response ...
	2023/12/18 11:49:04 Ready to write response ...
	2023/12/18 11:49:13 Ready to marshal response ...
	2023/12/18 11:49:13 Ready to write response ...
	2023/12/18 11:49:37 Ready to marshal response ...
	2023/12/18 11:49:37 Ready to write response ...
	2023/12/18 11:49:37 Ready to marshal response ...
	2023/12/18 11:49:37 Ready to write response ...
	2023/12/18 11:50:04 Ready to marshal response ...
	2023/12/18 11:50:04 Ready to write response ...
	2023/12/18 11:50:04 Ready to marshal response ...
	2023/12/18 11:50:04 Ready to write response ...
	2023/12/18 11:50:04 Ready to marshal response ...
	2023/12/18 11:50:04 Ready to write response ...
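
(Each "Ready to marshal/write" pair is one admission request handled by the webhook; the pairs at 11:50:04 line up with the headlamp pod creation in the kube-controller-manager section below. A hypothetical handler shape that would produce the same two log phases — this is not the real gcp-auth webhook, and it skips the TLS serving and AdmissionReview decoding a real one needs:)

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    func handle(w http.ResponseWriter, r *http.Request) {
        resp := map[string]any{"allowed": true}
        log.Println("Ready to marshal response ...")
        body, err := json.Marshal(resp)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        log.Println("Ready to write response ...")
        w.Header().Set("Content-Type", "application/json")
        w.Write(body)
    }

    func main() {
        http.HandleFunc("/mutate", handle)
        log.Fatal(http.ListenAndServe(":8443", nil))
    }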
	
	* 
	* ==> kernel <==
	*  11:50:11 up 6 min,  0 users,  load average: 3.33, 2.63, 1.21
	Linux addons-922300 5.10.57 #1 SMP Wed Dec 13 22:38:26 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [77180e5bd619] <==
	* W1218 11:46:29.305058       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1218 11:46:30.722965       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.96.17.47"}
	I1218 11:46:36.090133       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1218 11:47:21.598405       1 handler_proxy.go:93] no RequestInfo found in the context
	E1218 11:47:21.598436       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1218 11:47:21.598445       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1218 11:47:21.600061       1 handler_proxy.go:93] no RequestInfo found in the context
	E1218 11:47:21.600180       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1218 11:47:21.600192       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1218 11:47:36.091038       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1218 11:47:46.092820       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.92.174:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.92.174:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.92.174:443: connect: connection refused
	W1218 11:47:46.093029       1 handler_proxy.go:93] no RequestInfo found in the context
	E1218 11:47:46.093725       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1218 11:47:46.094553       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1218 11:47:46.094656       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.92.174:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.92.174:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.92.174:443: connect: connection refused
	E1218 11:47:46.098925       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.92.174:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.92.174:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.92.174:443: connect: connection refused
	E1218 11:47:46.119818       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.92.174:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.92.174:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.92.174:443: connect: connection refused
	I1218 11:47:46.235351       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1218 11:48:36.097433       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1218 11:49:27.692691       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1218 11:49:47.113614       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1218 11:50:04.604833       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.71.178"}
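
(The repeated v1beta1.metrics.k8s.io failures above mean the aggregated metrics-server APIService was unreachable while its backing pod restarted; by 11:49:47 the controller removes the item from the retry queue. A sketch for watching when the aggregated API goes Available again, assuming kubectl on PATH:)

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // Prints "True" once the APIService's Available condition recovers.
        out, err := exec.Command("kubectl", "--context", "addons-922300",
            "get", "apiservice", "v1beta1.metrics.k8s.io", "-o",
            `jsonpath={.status.conditions[?(@.type=="Available")].status}`).Output()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(out))
    }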
	
	* 
	* ==> kube-controller-manager [26071d27711e] <==
	* I1218 11:48:38.087591       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1218 11:48:58.196406       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="63.399µs"
	I1218 11:49:01.687861       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="32.103703ms"
	I1218 11:49:01.687931       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="36.4µs"
	I1218 11:49:02.793633       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I1218 11:49:02.861582       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1218 11:49:03.210318       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1218 11:49:03.673155       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1218 11:49:07.121364       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1218 11:49:12.361900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="31.149679ms"
	I1218 11:49:12.362563       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="463.896µs"
	I1218 11:49:22.955845       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="9.8µs"
	I1218 11:49:30.926001       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1218 11:49:30.926670       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1218 11:49:36.384239       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1218 11:49:46.517128       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="7.9µs"
	I1218 11:50:04.689196       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-777fd4b855 to 1"
	I1218 11:50:04.752049       1 event.go:307] "Event occurred" object="headlamp/headlamp-777fd4b855" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-777fd4b855-mcx44"
	I1218 11:50:04.805779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-777fd4b855" duration="115.379315ms"
	I1218 11:50:04.818330       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-777fd4b855" duration="12.384648ms"
	I1218 11:50:04.820521       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-777fd4b855" duration="55.5µs"
	I1218 11:50:04.832637       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-777fd4b855" duration="104.2µs"
	I1218 11:50:07.294747       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I1218 11:50:07.431809       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	I1218 11:50:08.789335       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-5649c69bf6" duration="4.8µs"
	
	* 
	* ==> kube-proxy [d8f85c894061] <==
	* I1218 11:46:05.139350       1 server_others.go:69] "Using iptables proxy"
	I1218 11:46:05.428590       1 node.go:141] Successfully retrieved node IP: 192.168.238.87
	I1218 11:46:05.623526       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1218 11:46:05.623720       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1218 11:46:05.650036       1 server_others.go:152] "Using iptables Proxier"
	I1218 11:46:05.650118       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1218 11:46:05.651323       1 server.go:846] "Version info" version="v1.28.4"
	I1218 11:46:05.651378       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1218 11:46:05.654482       1 config.go:315] "Starting node config controller"
	I1218 11:46:05.654571       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1218 11:46:05.655372       1 config.go:97] "Starting endpoint slice config controller"
	I1218 11:46:05.655423       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1218 11:46:05.666134       1 config.go:188] "Starting service config controller"
	I1218 11:46:05.666224       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1218 11:46:05.755773       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1218 11:46:05.755841       1 shared_informer.go:318] Caches are synced for node config
	I1218 11:46:05.766364       1 shared_informer.go:318] Caches are synced for service config
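
(The start/wait/synced triplets above are client-go's shared-informer pattern: each config controller starts, waits for its watch cache to fill, then reports "Caches are synced". A minimal sketch of the same pattern against this cluster; the kubeconfig path is the one from the test environment, everything else is an assumption:)

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("",
            `C:\Users\jenkins.minikube7\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        factory := informers.NewSharedInformerFactory(client, 30*time.Second)
        svc := factory.Core().V1().Services().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop) // "Starting service config controller"
        if !cache.WaitForCacheSync(stop, svc.HasSynced) {
            panic("cache sync failed") // otherwise: "Caches are synced"
        }
        fmt.Println("caches are synced for service config")
    }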
	
	* 
	* ==> kube-scheduler [faa8d2b41678] <==
	* W1218 11:45:37.217618       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1218 11:45:37.217676       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1218 11:45:37.249693       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1218 11:45:37.249717       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1218 11:45:37.307479       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1218 11:45:37.308402       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1218 11:45:37.424640       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1218 11:45:37.424840       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1218 11:45:37.428388       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1218 11:45:37.428429       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1218 11:45:37.432800       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1218 11:45:37.433195       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1218 11:45:37.447674       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1218 11:45:37.447785       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1218 11:45:37.467190       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1218 11:45:37.467525       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1218 11:45:37.490557       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1218 11:45:37.490581       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1218 11:45:37.492546       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1218 11:45:37.492575       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1218 11:45:37.498409       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1218 11:45:37.498430       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1218 11:45:37.547782       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1218 11:45:37.548100       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1218 11:45:39.040504       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
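
(The "forbidden" list/watch errors above are the usual scheduler start-up race: it comes up before the apiserver serves its RBAC bindings, retries, and recovers, which is what the final "Caches are synced" line at 11:45:39 confirms. A quick triage sketch over a saved copy of this log, fed on stdin:)

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        var forbidden int
        synced := false
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // log lines can be long
        for sc.Scan() {
            if strings.Contains(sc.Text(), "is forbidden") {
                forbidden++
            }
            if strings.Contains(sc.Text(), "Caches are synced") {
                synced = true
            }
        }
        // Benign if caches eventually synced despite the early RBAC errors.
        fmt.Printf("forbidden errors: %d, caches synced: %v\n", forbidden, synced)
    }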
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-12-18 11:43:43 UTC, ends at Mon 2023-12-18 11:50:11 UTC. --
	Dec 18 11:50:10 addons-922300 kubelet[2666]: I1218 11:50:10.932734    2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"d58179c2a2eafe14ce16e461f3217f271025615ec3af8daf2b52fcac2026ae2f"} err="failed to get container status \"d58179c2a2eafe14ce16e461f3217f271025615ec3af8daf2b52fcac2026ae2f\": rpc error: code = Unknown desc = Error response from daemon: No such container: d58179c2a2eafe14ce16e461f3217f271025615ec3af8daf2b52fcac2026ae2f"
	Dec 18 11:50:10 addons-922300 kubelet[2666]: I1218 11:50:10.932757    2666 scope.go:117] "RemoveContainer" containerID="0a7067b2a4f9bf47fd7f6c864215eb24e75314e3f644bda5cec3117404cfbbb9"
	Dec 18 11:50:10 addons-922300 kubelet[2666]: I1218 11:50:10.933671    2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"0a7067b2a4f9bf47fd7f6c864215eb24e75314e3f644bda5cec3117404cfbbb9"} err="failed to get container status \"0a7067b2a4f9bf47fd7f6c864215eb24e75314e3f644bda5cec3117404cfbbb9\": rpc error: code = Unknown desc = Error response from daemon: No such container: 0a7067b2a4f9bf47fd7f6c864215eb24e75314e3f644bda5cec3117404cfbbb9"
	Dec 18 11:50:10 addons-922300 kubelet[2666]: I1218 11:50:10.933695    2666 scope.go:117] "RemoveContainer" containerID="c8b028f63ded8b6c421738de6c69cff52b6e5a932d10c160ddba39c531a63218"
	Dec 18 11:50:10 addons-922300 kubelet[2666]: I1218 11:50:10.936222    2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"c8b028f63ded8b6c421738de6c69cff52b6e5a932d10c160ddba39c531a63218"} err="failed to get container status \"c8b028f63ded8b6c421738de6c69cff52b6e5a932d10c160ddba39c531a63218\": rpc error: code = Unknown desc = Error response from daemon: No such container: c8b028f63ded8b6c421738de6c69cff52b6e5a932d10c160ddba39c531a63218"
	Dec 18 11:50:10 addons-922300 kubelet[2666]: I1218 11:50:10.936318    2666 scope.go:117] "RemoveContainer" containerID="cad82ce5f603fd7c9c1453b2bce2349df7010b3418abfb55a60f699a409255fe"
	Dec 18 11:50:10 addons-922300 kubelet[2666]: I1218 11:50:10.938836    2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"cad82ce5f603fd7c9c1453b2bce2349df7010b3418abfb55a60f699a409255fe"} err="failed to get container status \"cad82ce5f603fd7c9c1453b2bce2349df7010b3418abfb55a60f699a409255fe\": rpc error: code = Unknown desc = Error response from daemon: No such container: cad82ce5f603fd7c9c1453b2bce2349df7010b3418abfb55a60f699a409255fe"
	Dec 18 11:50:10 addons-922300 kubelet[2666]: I1218 11:50:10.938888    2666 scope.go:117] "RemoveContainer" containerID="4ca247059ac6113e637a88180757fd4f17e5e4bc8be7f49d5133d2350eb8bf62"
	Dec 18 11:50:10 addons-922300 kubelet[2666]: I1218 11:50:10.940330    2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"4ca247059ac6113e637a88180757fd4f17e5e4bc8be7f49d5133d2350eb8bf62"} err="failed to get container status \"4ca247059ac6113e637a88180757fd4f17e5e4bc8be7f49d5133d2350eb8bf62\": rpc error: code = Unknown desc = Error response from daemon: No such container: 4ca247059ac6113e637a88180757fd4f17e5e4bc8be7f49d5133d2350eb8bf62"
	Dec 18 11:50:10 addons-922300 kubelet[2666]: I1218 11:50:10.940354    2666 scope.go:117] "RemoveContainer" containerID="53895aaf3b1cba6a8180adf7a6c588db6c877e07757ef21b04fbf5bccaa03244"
	Dec 18 11:50:10 addons-922300 kubelet[2666]: I1218 11:50:10.941724    2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"53895aaf3b1cba6a8180adf7a6c588db6c877e07757ef21b04fbf5bccaa03244"} err="failed to get container status \"53895aaf3b1cba6a8180adf7a6c588db6c877e07757ef21b04fbf5bccaa03244\": rpc error: code = Unknown desc = Error response from daemon: No such container: 53895aaf3b1cba6a8180adf7a6c588db6c877e07757ef21b04fbf5bccaa03244"
	Dec 18 11:50:10 addons-922300 kubelet[2666]: I1218 11:50:10.941779    2666 scope.go:117] "RemoveContainer" containerID="d58179c2a2eafe14ce16e461f3217f271025615ec3af8daf2b52fcac2026ae2f"
	Dec 18 11:50:10 addons-922300 kubelet[2666]: I1218 11:50:10.943014    2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"d58179c2a2eafe14ce16e461f3217f271025615ec3af8daf2b52fcac2026ae2f"} err="failed to get container status \"d58179c2a2eafe14ce16e461f3217f271025615ec3af8daf2b52fcac2026ae2f\": rpc error: code = Unknown desc = Error response from daemon: No such container: d58179c2a2eafe14ce16e461f3217f271025615ec3af8daf2b52fcac2026ae2f"
	Dec 18 11:50:10 addons-922300 kubelet[2666]: I1218 11:50:10.943041    2666 scope.go:117] "RemoveContainer" containerID="0a7067b2a4f9bf47fd7f6c864215eb24e75314e3f644bda5cec3117404cfbbb9"
	Dec 18 11:50:10 addons-922300 kubelet[2666]: I1218 11:50:10.944237    2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"0a7067b2a4f9bf47fd7f6c864215eb24e75314e3f644bda5cec3117404cfbbb9"} err="failed to get container status \"0a7067b2a4f9bf47fd7f6c864215eb24e75314e3f644bda5cec3117404cfbbb9\": rpc error: code = Unknown desc = Error response from daemon: No such container: 0a7067b2a4f9bf47fd7f6c864215eb24e75314e3f644bda5cec3117404cfbbb9"
	Dec 18 11:50:10 addons-922300 kubelet[2666]: I1218 11:50:10.944325    2666 scope.go:117] "RemoveContainer" containerID="c8b028f63ded8b6c421738de6c69cff52b6e5a932d10c160ddba39c531a63218"
	Dec 18 11:50:10 addons-922300 kubelet[2666]: I1218 11:50:10.944995    2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"c8b028f63ded8b6c421738de6c69cff52b6e5a932d10c160ddba39c531a63218"} err="failed to get container status \"c8b028f63ded8b6c421738de6c69cff52b6e5a932d10c160ddba39c531a63218\": rpc error: code = Unknown desc = Error response from daemon: No such container: c8b028f63ded8b6c421738de6c69cff52b6e5a932d10c160ddba39c531a63218"
	Dec 18 11:50:10 addons-922300 kubelet[2666]: I1218 11:50:10.945021    2666 scope.go:117] "RemoveContainer" containerID="b63706cc6184f282e719f2a2dff6f24e895d50102449ea796d5cac98059ebc30"
	Dec 18 11:50:10 addons-922300 kubelet[2666]: I1218 11:50:10.991351    2666 scope.go:117] "RemoveContainer" containerID="b63706cc6184f282e719f2a2dff6f24e895d50102449ea796d5cac98059ebc30"
	Dec 18 11:50:11 addons-922300 kubelet[2666]: E1218 11:50:11.002392    2666 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: b63706cc6184f282e719f2a2dff6f24e895d50102449ea796d5cac98059ebc30" containerID="b63706cc6184f282e719f2a2dff6f24e895d50102449ea796d5cac98059ebc30"
	Dec 18 11:50:11 addons-922300 kubelet[2666]: I1218 11:50:11.002477    2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"b63706cc6184f282e719f2a2dff6f24e895d50102449ea796d5cac98059ebc30"} err="failed to get container status \"b63706cc6184f282e719f2a2dff6f24e895d50102449ea796d5cac98059ebc30\": rpc error: code = Unknown desc = Error response from daemon: No such container: b63706cc6184f282e719f2a2dff6f24e895d50102449ea796d5cac98059ebc30"
	Dec 18 11:50:11 addons-922300 kubelet[2666]: I1218 11:50:11.002498    2666 scope.go:117] "RemoveContainer" containerID="703ae04f669591aec8eff6d3fedfbbf525279d7d21a9f14ca46aedd9035587d5"
	Dec 18 11:50:11 addons-922300 kubelet[2666]: I1218 11:50:11.283093    2666 scope.go:117] "RemoveContainer" containerID="8fdf6e5c7d5cd64cfcf6113b76b0101546440e4768097d496bb986c555959154"
	Dec 18 11:50:11 addons-922300 kubelet[2666]: E1218 11:50:11.283690    2666 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-46nrg_gadget(dc076148-5aff-4283-a788-f9cc8dc92a19)\"" pod="gadget/gadget-46nrg" podUID="dc076148-5aff-4283-a788-f9cc8dc92a19"
	Dec 18 11:50:11 addons-922300 kubelet[2666]: I1218 11:50:11.632968    2666 scope.go:117] "RemoveContainer" containerID="8c8b039ee5ef6f7ce6d48b11e5096ef3a74e6b77e2973e08511fdba06c20d680"
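
(Every "No such container" error above names a container that was already gone: the registry addon had just been disabled, Docker removed the containers, and the kubelet's follow-up status lookups fail. Note the same IDs recur, e.g. d58179c2... at 11:50:10.932734 and again at 11:50:10.943014. A sketch that tallies the repeated 64-hex container IDs from a saved copy of this log on stdin:)

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        idRe := regexp.MustCompile(`[0-9a-f]{64}`)
        seen := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024)
        for sc.Scan() {
            for _, id := range idRe.FindAllString(sc.Text(), -1) {
                seen[id]++
            }
        }
        for id, n := range seen {
            if n > 1 {
                fmt.Printf("%s... seen %d times\n", id[:12], n)
            }
        }
    }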
	
	* 
	* ==> storage-provisioner [98132f8e72d9] <==
	* I1218 11:46:30.557514       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1218 11:46:30.575747       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1218 11:46:30.575793       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1218 11:46:30.607020       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1218 11:46:30.607483       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-922300_f8790341-81c1-483f-a222-7afe00f491c1!
	I1218 11:46:30.610094       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"72c27182-13dd-4923-82be-225a7115af90", APIVersion:"v1", ResourceVersion:"814", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-922300_f8790341-81c1-483f-a222-7afe00f491c1 became leader
	I1218 11:46:30.707637       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-922300_f8790341-81c1-483f-a222-7afe00f491c1!
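
(The provisioner serializes itself through client-go leader election: it takes the kube-system/k8s.io-minikube-hostpath lock, emits a LeaderElection event, and only then starts its controller. The binary above uses the legacy Endpoints-based lock; a sketch of the same handoff with the current Lease-based API, under the KUBECONFIG from this environment:)

    package main

    import (
        "context"
        "log"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
            Client:     client.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: "sketch-holder"},
        }
        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { log.Println("successfully acquired lease") },
                OnStoppedLeading: func() { log.Println("lost lease") },
            },
        })
    }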
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1218 11:50:01.263805    1860 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
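
(This warning is the whole reason the Registry test failed: the registry itself worked — the wget probe at addons_test.go:344 succeeded — but addons_test.go:363 asserts that "minikube ip" produces empty stderr, and the stale Docker CLI context, a missing meta.json under .docker\contexts, makes every minikube invocation emit this W-line. A hypothetical filter, not the harness's actual code, that would treat the warning as benign before asserting:)

    package main

    import (
        "fmt"
        "strings"
    )

    // filterBenign drops the known-benign Docker-context warning and blank
    // lines; anything that remains would still fail an empty-stderr check.
    func filterBenign(stderr string) string {
        var kept []string
        for _, ln := range strings.Split(stderr, "\n") {
            if strings.Contains(ln, "Unable to resolve the current Docker CLI context") {
                continue
            }
            if strings.TrimSpace(ln) != "" {
                kept = append(kept, ln)
            }
        }
        return strings.Join(kept, "\n")
    }

    func main() {
        fmt.Printf("%q\n", filterBenign("W1218 11:50:01.263805 1860 main.go:291] Unable to resolve the current Docker CLI context \"default\": ...\n"))
    }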
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-922300 -n addons-922300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-922300 -n addons-922300: (13.5851066s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-922300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-nmhw5 ingress-nginx-admission-patch-xzzk4
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-922300 describe pod ingress-nginx-admission-create-nmhw5 ingress-nginx-admission-patch-xzzk4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-922300 describe pod ingress-nginx-admission-create-nmhw5 ingress-nginx-admission-patch-xzzk4: exit status 1 (210.0554ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-nmhw5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xzzk4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-922300 describe pod ingress-nginx-admission-create-nmhw5 ingress-nginx-admission-patch-xzzk4: exit status 1
--- FAIL: TestAddons/parallel/Registry (84.61s)


                                                
                                    
TestForceSystemdFlag (632.06s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-455900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
E1218 13:41:45.572703   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
docker_test.go:91: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-flag-455900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: exit status 90 (8m17.2182239s)
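
(The start never got past VM creation: the stdout below stops after "Creating hyperv VM", and the stderr trace shows 4m45s spent just acquiring the machines lock — 13:40:13 to 13:44:58 — while docker-flags-904000, force-systemd-env-915900 and other profiles were active on the same host. A hypothetical repro harness for the exit code; the command line is verbatim from docker_test.go:91 above:)

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-windows-amd64.exe", "start",
            "-p", "force-systemd-flag-455900", "--memory=2048",
            "--force-systemd", "--alsologtostderr", "-v=5", "--driver=hyperv")
        err := cmd.Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            fmt.Println("exit status:", ee.ExitCode()) // this run saw 90
        }
    }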

                                                
                                                
-- stdout --
	* [force-systemd-flag-455900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17824
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node force-systemd-flag-455900 in cluster force-systemd-flag-455900
	* Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1218 13:40:06.838863   13916 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1218 13:40:06.916402   13916 out.go:296] Setting OutFile to fd 1904 ...
	I1218 13:40:06.917074   13916 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 13:40:06.917074   13916 out.go:309] Setting ErrFile to fd 1908...
	I1218 13:40:06.917074   13916 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 13:40:06.941889   13916 out.go:303] Setting JSON to false
	I1218 13:40:06.945900   13916 start.go:128] hostinfo: {"hostname":"minikube7","uptime":7281,"bootTime":1702899525,"procs":203,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1218 13:40:06.946539   13916 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1218 13:40:06.947853   13916 out.go:177] * [force-systemd-flag-455900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1218 13:40:06.948592   13916 notify.go:220] Checking for updates...
	I1218 13:40:06.948592   13916 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 13:40:06.950536   13916 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 13:40:06.951324   13916 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1218 13:40:06.952278   13916 out.go:177]   - MINIKUBE_LOCATION=17824
	I1218 13:40:06.953105   13916 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 13:40:06.956147   13916 config.go:182] Loaded profile config "docker-flags-904000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 13:40:06.956147   13916 config.go:182] Loaded profile config "force-systemd-env-915900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 13:40:06.957401   13916 config.go:182] Loaded profile config "multinode-015900-m01": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 13:40:06.957772   13916 config.go:182] Loaded profile config "pause-984000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 13:40:06.957772   13916 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 13:40:13.123368   13916 out.go:177] * Using the hyperv driver based on user configuration
	I1218 13:40:13.124369   13916 start.go:298] selected driver: hyperv
	I1218 13:40:13.124369   13916 start.go:902] validating driver "hyperv" against <nil>
	I1218 13:40:13.124369   13916 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 13:40:13.180848   13916 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1218 13:40:13.182515   13916 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1218 13:40:13.182515   13916 cni.go:84] Creating CNI manager for ""
	I1218 13:40:13.182515   13916 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1218 13:40:13.182723   13916 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1218 13:40:13.182723   13916 start_flags.go:323] config:
	{Name:force-systemd-flag-455900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-455900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 13:40:13.183112   13916 iso.go:125] acquiring lock: {Name:mka7fdf4332632f41b99a369cc525e40499a0030 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 13:40:13.184375   13916 out.go:177] * Starting control plane node force-systemd-flag-455900 in cluster force-systemd-flag-455900
	I1218 13:40:13.185691   13916 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 13:40:13.185691   13916 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1218 13:40:13.185691   13916 cache.go:56] Caching tarball of preloaded images
	I1218 13:40:13.186228   13916 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1218 13:40:13.186425   13916 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1218 13:40:13.186578   13916 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\force-systemd-flag-455900\config.json ...
	I1218 13:40:13.186578   13916 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\force-systemd-flag-455900\config.json: {Name:mke63a8db9b67f8a075bd523b240e4a2ecd2d272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 13:40:13.188401   13916 start.go:365] acquiring machines lock for force-systemd-flag-455900: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1218 13:44:58.861384   13916 start.go:369] acquired machines lock for "force-systemd-flag-455900" in 4m45.6717019s
	I1218 13:44:58.861384   13916 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-455900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-455900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1218 13:44:58.862149   13916 start.go:125] createHost starting for "" (driver="hyperv")
	I1218 13:44:58.862963   13916 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1218 13:44:58.863396   13916 start.go:159] libmachine.API.Create for "force-systemd-flag-455900" (driver="hyperv")
	I1218 13:44:58.863488   13916 client.go:168] LocalClient.Create starting
	I1218 13:44:58.864074   13916 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1218 13:44:58.864483   13916 main.go:141] libmachine: Decoding PEM data...
	I1218 13:44:58.864483   13916 main.go:141] libmachine: Parsing certificate...
	I1218 13:44:58.864696   13916 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1218 13:44:58.864978   13916 main.go:141] libmachine: Decoding PEM data...
	I1218 13:44:58.865042   13916 main.go:141] libmachine: Parsing certificate...
	I1218 13:44:58.865153   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1218 13:45:00.897259   13916 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1218 13:45:00.897259   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:00.897426   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1218 13:45:02.780313   13916 main.go:141] libmachine: [stdout =====>] : False
	
	I1218 13:45:02.780591   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:02.780591   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1218 13:45:04.514280   13916 main.go:141] libmachine: [stdout =====>] : True
	
	I1218 13:45:04.514641   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:04.514641   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1218 13:45:08.882518   13916 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1218 13:45:08.882697   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:08.885666   13916 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702490427-17765-amd64.iso...
	I1218 13:45:09.340838   13916 main.go:141] libmachine: Creating SSH key...
	I1218 13:45:09.955844   13916 main.go:141] libmachine: Creating VM...
	I1218 13:45:09.955844   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1218 13:45:13.106799   13916 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1218 13:45:13.106799   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:13.106799   13916 main.go:141] libmachine: Using switch "Default Switch"
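The driver queries Hyper-V for usable switches (any External switch, or the built-in Default Switch by its well-known GUID) and decodes the JSON shown above before settling on "Default Switch". A Go sketch of that selection; the preference order and the enum values (Private=0, Internal=1, External=2) are assumptions based on Hyper-V's VMSwitchType, consistent with the SwitchType:1 printed here:

    // A sketch, not driver code: decode the Get-VMSwitch JSON and prefer
    // an External switch, else fall back to the Default Switch GUID.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int
    }

    const defaultSwitchID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"

    func pick(switches []vmSwitch) (*vmSwitch, error) {
        for i, s := range switches {
            if s.SwitchType == 2 { // External (assumed enum value)
                return &switches[i], nil
            }
        }
        for i, s := range switches {
            if s.Id == defaultSwitchID {
                return &switches[i], nil
            }
        }
        return nil, fmt.Errorf("no usable Hyper-V switch found")
    }

    func main() {
        raw := `[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`
        var switches []vmSwitch
        if err := json.Unmarshal([]byte(raw), &switches); err != nil {
            panic(err)
        }
        s, err := pick(switches)
        if err != nil {
            panic(err)
        }
        fmt.Printf("Using switch %q\n", s.Name)
    }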
	I1218 13:45:13.106799   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1218 13:45:14.949997   13916 main.go:141] libmachine: [stdout =====>] : True
	
	I1218 13:45:14.949997   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:14.949997   13916 main.go:141] libmachine: Creating VHD
	I1218 13:45:14.950133   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-455900\fixed.vhd' -SizeBytes 10MB -Fixed
	I1218 13:45:18.801809   13916 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-455900\
	                          fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 63A2B2EA-6E4F-4F5B-9FDC-10160B51F824
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1218 13:45:18.802172   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:18.802287   13916 main.go:141] libmachine: Writing magic tar header
	I1218 13:45:18.802359   13916 main.go:141] libmachine: Writing SSH key tar header
	I1218 13:45:18.811831   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-455900\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-455900\disk.vhd' -VHDType Dynamic -DeleteSource
	I1218 13:45:22.113129   13916 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:45:22.113129   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:22.113129   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-455900\disk.vhd' -SizeBytes 20000MB
	I1218 13:45:24.839627   13916 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:45:24.839835   13916 main.go:141] libmachine: [stderr =====>] : 
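The 10MB fixed VHD above is a staging trick, not the data disk itself: the driver writes a small tar stream (the "Writing magic tar header" / "Writing SSH key tar header" steps) into its raw data area so the boot2docker-style guest can unpack the SSH key on first boot, then converts the image to a dynamic VHD and resizes it to the requested 20000MB. A hedged Go sketch of the tar-writing step; the file name and placeholder key are illustrative, not read from this log:

    // A sketch of the tar-injection step: boot2docker-style guests look
    // for a tar archive at the start of the raw data disk and unpack it
    // on first boot, which is how the generated SSH key reaches the VM.
    package main

    import (
        "archive/tar"
        "os"
    )

    func main() {
        disk, err := os.OpenFile("disk.img", os.O_CREATE|os.O_WRONLY, 0o644)
        if err != nil {
            panic(err)
        }
        defer disk.Close()

        key := []byte("ssh-rsa AAAA...example-public-key\n") // placeholder

        tw := tar.NewWriter(disk) // the "magic tar header" at offset 0
        defer tw.Close()
        if err := tw.WriteHeader(&tar.Header{Name: ".ssh/", Typeflag: tar.TypeDir, Mode: 0o700}); err != nil {
            panic(err)
        }
        if err := tw.WriteHeader(&tar.Header{
            Name: ".ssh/authorized_keys",
            Mode: 0o600,
            Size: int64(len(key)),
        }); err != nil {
            panic(err)
        }
        if _, err := tw.Write(key); err != nil {
            panic(err)
        }
    }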
	I1218 13:45:24.839937   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM force-systemd-flag-455900 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-455900' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
	I1218 13:45:28.544474   13916 main.go:141] libmachine: [stdout =====>] : 
	Name                      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                      ----- ----------- ----------------- ------   ------             -------
	force-systemd-flag-455900 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1218 13:45:28.544474   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:28.544570   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName force-systemd-flag-455900 -DynamicMemoryEnabled $false
	I1218 13:45:30.856833   13916 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:45:30.856833   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:30.856971   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor force-systemd-flag-455900 -Count 2
	I1218 13:45:33.004985   13916 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:45:33.005209   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:33.005209   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName force-systemd-flag-455900 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-455900\boot2docker.iso'
	I1218 13:45:35.602456   13916 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:45:35.602456   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:35.602456   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName force-systemd-flag-455900 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-455900\disk.vhd'
	I1218 13:45:38.223610   13916 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:45:38.223610   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:38.223610   13916 main.go:141] libmachine: Starting VM...
	I1218 13:45:38.223711   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM force-systemd-flag-455900
	I1218 13:45:41.138024   13916 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:45:41.138074   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:41.138162   13916 main.go:141] libmachine: Waiting for host to start...
	I1218 13:45:41.138162   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-455900 ).state
	I1218 13:45:43.436123   13916 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:45:43.436190   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:43.436261   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-455900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:45:45.962954   13916 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:45:45.962954   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:46.964264   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-455900 ).state
	I1218 13:45:49.181262   13916 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:45:49.181262   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:49.181462   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-455900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:45:51.743607   13916 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:45:51.743607   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:52.758836   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-455900 ).state
	I1218 13:45:54.947477   13916 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:45:54.947560   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:54.947560   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-455900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:45:57.479995   13916 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:45:57.479995   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:58.484040   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-455900 ).state
	I1218 13:46:00.753839   13916 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:46:00.753988   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:46:00.753988   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-455900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:46:03.287270   13916 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:46:03.287270   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:46:04.301001   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-455900 ).state
	I1218 13:46:06.532284   13916 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:46:06.532284   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:46:06.532284   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-455900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:46:09.100917   13916 main.go:141] libmachine: [stdout =====>] : 192.168.225.240
	
	I1218 13:46:09.101071   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:46:09.101128   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-455900 ).state
	I1218 13:46:11.230586   13916 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:46:11.230679   13916 main.go:141] libmachine: [stderr =====>] : 
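From Start-VM onward the driver alternates between polling the VM state and the first NIC's first IP address until DHCP on the Default Switch hands one out (the empty stdout lines above are polls before an address existed; 192.168.225.240 arrived roughly 28 seconds after boot). A Go sketch of that wait loop, where powershell() is a hypothetical stand-in for the driver's helper that shells out to powershell.exe; the command strings match the log:

    // A sketch of the wait loop above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func powershell(cmd string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
        return strings.TrimSpace(string(out)), err
    }

    func waitForIP(vm string) (string, error) {
        for i := 0; i < 120; i++ {
            state, err := powershell(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
            if err != nil || state != "Running" {
                return "", fmt.Errorf("vm not running (state=%q, err=%v)", state, err)
            }
            // Empty stdout, as in the first few polls above, means DHCP
            // has not assigned an address yet; keep polling.
            ip, _ := powershell(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
            if ip != "" {
                return ip, nil
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("timed out waiting for an IP address")
    }

    func main() {
        ip, err := waitForIP("force-systemd-flag-455900")
        if err != nil {
            panic(err)
        }
        fmt.Println("VM IP:", ip)
    }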
	I1218 13:46:11.230679   13916 machine.go:88] provisioning docker machine ...
	I1218 13:46:11.230679   13916 buildroot.go:166] provisioning hostname "force-systemd-flag-455900"
	I1218 13:46:11.230760   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-455900 ).state
	I1218 13:46:13.508980   13916 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:46:13.509087   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:46:13.509184   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-455900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:46:16.147057   13916 main.go:141] libmachine: [stdout =====>] : 192.168.225.240
	
	I1218 13:46:16.147057   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:46:16.152101   13916 main.go:141] libmachine: Using SSH client type: native
	I1218 13:46:16.152846   13916 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.225.240 22 <nil> <nil>}
	I1218 13:46:16.152846   13916 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-455900 && echo "force-systemd-flag-455900" | sudo tee /etc/hostname
	I1218 13:46:16.323287   13916 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-455900
	
	I1218 13:46:16.323287   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-455900 ).state
	I1218 13:46:18.540458   13916 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:46:18.540458   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:46:18.540458   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-455900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:46:21.186739   13916 main.go:141] libmachine: [stdout =====>] : 192.168.225.240
	
	I1218 13:46:21.186739   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:46:21.193575   13916 main.go:141] libmachine: Using SSH client type: native
	I1218 13:46:21.194272   13916 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.225.240 22 <nil> <nil>}
	I1218 13:46:21.194272   13916 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-455900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-455900/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-455900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 13:46:21.363112   13916 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1218 13:46:21.363112   13916 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1218 13:46:21.363112   13916 buildroot.go:174] setting up certificates
	I1218 13:46:21.363112   13916 provision.go:83] configureAuth start
	I1218 13:46:21.363112   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-455900 ).state
	I1218 13:46:23.533020   13916 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:46:23.533020   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:46:23.533166   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-455900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:46:26.106788   13916 main.go:141] libmachine: [stdout =====>] : 192.168.225.240
	
	I1218 13:46:26.107005   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:46:26.107005   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-455900 ).state
	I1218 13:46:28.252783   13916 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:46:28.252783   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:46:28.252783   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-455900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:46:30.798179   13916 main.go:141] libmachine: [stdout =====>] : 192.168.225.240
	
	I1218 13:46:30.798579   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:46:30.798579   13916 provision.go:138] copyHostCerts
	I1218 13:46:30.798663   13916 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I1218 13:46:30.798663   13916 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1218 13:46:30.798663   13916 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1218 13:46:30.799531   13916 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1218 13:46:30.800771   13916 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I1218 13:46:30.801263   13916 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1218 13:46:30.801345   13916 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1218 13:46:30.801682   13916 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1218 13:46:30.802862   13916 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I1218 13:46:30.803153   13916 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1218 13:46:30.803216   13916 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1218 13:46:30.803590   13916 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1218 13:46:30.804230   13916 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.force-systemd-flag-455900 san=[192.168.225.240 192.168.225.240 localhost 127.0.0.1 minikube force-systemd-flag-455900]
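configureAuth generates a server certificate whose SANs are exactly the san=[...] list printed above: the VM IP (listed twice), localhost, 127.0.0.1, and both hostnames. A self-contained Go sketch of building such a certificate; it self-signs for brevity instead of signing with the ca.pem/ca-key.pem pair named in the log:

    // A sketch of the "generating server cert" step.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.force-systemd-flag-455900"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s
            IPAddresses:  []net.IP{net.ParseIP("192.168.225.240"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "force-systemd-flag-455900"},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Printf("server cert generated: %d DER bytes\n", len(der))
    }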
	I1218 13:46:31.047247   13916 provision.go:172] copyRemoteCerts
	I1218 13:46:31.060781   13916 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 13:46:31.060781   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-455900 ).state
	I1218 13:46:33.181703   13916 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:46:33.181703   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:46:33.181809   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-455900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:46:35.689253   13916 main.go:141] libmachine: [stdout =====>] : 192.168.225.240
	
	I1218 13:46:35.689460   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:46:35.689599   13916 sshutil.go:53] new ssh client: &{IP:192.168.225.240 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-455900\id_rsa Username:docker}
	I1218 13:46:35.803008   13916 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7422063s)
	I1218 13:46:35.803137   13916 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1218 13:46:35.803423   13916 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1218 13:46:35.841942   13916 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1218 13:46:35.842231   13916 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I1218 13:46:35.879457   13916 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1218 13:46:35.879849   13916 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 13:46:35.917620   13916 provision.go:86] duration metric: configureAuth took 14.5544438s
	I1218 13:46:35.917620   13916 buildroot.go:189] setting minikube options for container-runtime
	I1218 13:46:35.918470   13916 config.go:182] Loaded profile config "force-systemd-flag-455900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 13:46:35.918555   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-455900 ).state
	I1218 13:46:38.087895   13916 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:46:38.088143   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:46:38.088300   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-455900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:46:40.627776   13916 main.go:141] libmachine: [stdout =====>] : 192.168.225.240
	
	I1218 13:46:40.628033   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:46:40.633256   13916 main.go:141] libmachine: Using SSH client type: native
	I1218 13:46:40.633564   13916 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.225.240 22 <nil> <nil>}
	I1218 13:46:40.633564   13916 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1218 13:46:40.776645   13916 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1218 13:46:40.776645   13916 buildroot.go:70] root file system type: tmpfs
	I1218 13:46:40.776645   13916 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1218 13:46:40.776645   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-455900 ).state
	I1218 13:46:42.897686   13916 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:46:42.897850   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:46:42.897850   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-455900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:46:45.468282   13916 main.go:141] libmachine: [stdout =====>] : 192.168.225.240
	
	I1218 13:46:45.468456   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:46:45.473123   13916 main.go:141] libmachine: Using SSH client type: native
	I1218 13:46:45.473820   13916 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.225.240 22 <nil> <nil>}
	I1218 13:46:45.473820   13916 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1218 13:46:45.640377   13916 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1218 13:46:45.640573   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-455900 ).state
	I1218 13:46:47.771546   13916 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:46:47.771719   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:46:47.771771   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-455900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:46:50.278374   13916 main.go:141] libmachine: [stdout =====>] : 192.168.225.240
	
	I1218 13:46:50.278374   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:46:50.283632   13916 main.go:141] libmachine: Using SSH client type: native
	I1218 13:46:50.285576   13916 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.225.240 22 <nil> <nil>}
	I1218 13:46:50.285576   13916 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1218 13:46:51.261773   13916 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1218 13:46:51.261865   13916 machine.go:91] provisioned docker machine in 40.0310097s
	I1218 13:46:51.261965   13916 client.go:171] LocalClient.Create took 1m52.3979519s
	I1218 13:46:51.261965   13916 start.go:167] duration metric: libmachine.API.Create for "force-systemd-flag-455900" took 1m52.398083s
	I1218 13:46:51.262076   13916 start.go:300] post-start starting for "force-systemd-flag-455900" (driver="hyperv")
	I1218 13:46:51.262076   13916 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 13:46:51.277494   13916 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 13:46:51.277494   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-455900 ).state
	I1218 13:46:53.408505   13916 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:46:53.408700   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:46:53.408700   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-455900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:46:55.917437   13916 main.go:141] libmachine: [stdout =====>] : 192.168.225.240
	
	I1218 13:46:55.917509   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:46:55.917565   13916 sshutil.go:53] new ssh client: &{IP:192.168.225.240 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-455900\id_rsa Username:docker}
	I1218 13:46:56.026072   13916 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.748557s)
	I1218 13:46:56.038580   13916 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 13:46:56.044410   13916 info.go:137] Remote host: Buildroot 2021.02.12
	I1218 13:46:56.044410   13916 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1218 13:46:56.045459   13916 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1218 13:46:56.046507   13916 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem -> 149282.pem in /etc/ssl/certs
	I1218 13:46:56.046507   13916 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem -> /etc/ssl/certs/149282.pem
	I1218 13:46:56.059152   13916 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 13:46:56.075126   13916 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem --> /etc/ssl/certs/149282.pem (1708 bytes)
	I1218 13:46:56.114695   13916 start.go:303] post-start completed in 4.852598s
	I1218 13:46:56.116995   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-455900 ).state
	I1218 13:46:58.213328   13916 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:46:58.213569   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:46:58.213569   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-455900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:47:00.748961   13916 main.go:141] libmachine: [stdout =====>] : 192.168.225.240
	
	I1218 13:47:00.748961   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:47:00.749281   13916 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\force-systemd-flag-455900\config.json ...
	I1218 13:47:00.752126   13916 start.go:128] duration metric: createHost completed in 2m1.8894484s
	I1218 13:47:00.752362   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-455900 ).state
	I1218 13:47:02.863668   13916 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:47:02.863668   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:47:02.863759   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-455900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:47:05.369283   13916 main.go:141] libmachine: [stdout =====>] : 192.168.225.240
	
	I1218 13:47:05.369283   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:47:05.375040   13916 main.go:141] libmachine: Using SSH client type: native
	I1218 13:47:05.375366   13916 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.225.240 22 <nil> <nil>}
	I1218 13:47:05.375366   13916 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1218 13:47:05.516846   13916 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702907225.527498945
	
	I1218 13:47:05.516908   13916 fix.go:206] guest clock: 1702907225.527498945
	I1218 13:47:05.516908   13916 fix.go:219] Guest: 2023-12-18 13:47:05.527498945 +0000 UTC Remote: 2023-12-18 13:47:00.752126 +0000 UTC m=+414.016904601 (delta=4.775372945s)
	I1218 13:47:05.516908   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-455900 ).state
	I1218 13:47:07.612157   13916 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:47:07.612157   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:47:07.612157   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-455900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:47:10.161822   13916 main.go:141] libmachine: [stdout =====>] : 192.168.225.240
	
	I1218 13:47:10.162040   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:47:10.167664   13916 main.go:141] libmachine: Using SSH client type: native
	I1218 13:47:10.168374   13916 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.225.240 22 <nil> <nil>}
	I1218 13:47:10.168374   13916 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702907225
	I1218 13:47:10.333525   13916 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Dec 18 13:47:05 UTC 2023
	
	I1218 13:47:10.333525   13916 fix.go:226] clock set: Mon Dec 18 13:47:05 UTC 2023
	 (err=<nil>)
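The guest clock is read with date +%s.%N over SSH, compared against the host, and reset with sudo date -s when it drifts (a 4.775s delta was corrected here). A small Go sketch of the comparison; the one-second threshold is an assumption, and the sample value is the guest reading from the log:

    // A sketch of the guest/host clock comparison.
    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func main() {
        guestOut := "1702907225.527498945" // from `date +%s.%N` over SSH
        secs, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Until(guest)
        fmt.Printf("guest/host delta: %s\n", delta)
        if delta > time.Second || delta < -time.Second {
            // the driver corrects drift with: sudo date -s @<epoch>
            fmt.Printf("would run: sudo date -s @%d\n", time.Now().Unix())
        }
    }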
	I1218 13:47:10.333525   13916 start.go:83] releasing machines lock for "force-systemd-flag-455900", held for 2m11.4715718s
	I1218 13:47:10.334144   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-455900 ).state
	I1218 13:47:12.526994   13916 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:47:12.526994   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:47:12.526994   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-455900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:47:15.308456   13916 main.go:141] libmachine: [stdout =====>] : 192.168.225.240
	
	I1218 13:47:15.308456   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:47:15.318102   13916 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 13:47:15.319102   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-455900 ).state
	I1218 13:47:15.331464   13916 ssh_runner.go:195] Run: cat /version.json
	I1218 13:47:15.331464   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-455900 ).state
	I1218 13:47:17.909110   13916 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:47:17.909110   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:47:17.909110   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-455900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:47:17.920067   13916 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:47:17.920067   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:47:17.920067   13916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-455900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:47:20.773118   13916 main.go:141] libmachine: [stdout =====>] : 192.168.225.240
	
	I1218 13:47:20.773118   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:47:20.773672   13916 sshutil.go:53] new ssh client: &{IP:192.168.225.240 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-455900\id_rsa Username:docker}
	I1218 13:47:20.793393   13916 main.go:141] libmachine: [stdout =====>] : 192.168.225.240
	
	I1218 13:47:20.793393   13916 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:47:20.793393   13916 sshutil.go:53] new ssh client: &{IP:192.168.225.240 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-flag-455900\id_rsa Username:docker}
	I1218 13:47:20.942892   13916 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.6237657s)
	I1218 13:47:20.943041   13916 ssh_runner.go:235] Completed: cat /version.json: (5.6114041s)
	I1218 13:47:20.957611   13916 ssh_runner.go:195] Run: systemctl --version
	I1218 13:47:20.988357   13916 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 13:47:20.997846   13916 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 13:47:21.022578   13916 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 13:47:21.049663   13916 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1218 13:47:21.049833   13916 start.go:475] detecting cgroup driver to use...
	I1218 13:47:21.049833   13916 start.go:479] using "systemd" cgroup driver as enforced via flags
	I1218 13:47:21.049833   13916 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 13:47:21.094678   13916 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1218 13:47:21.124891   13916 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 13:47:21.140885   13916 containerd.go:145] configuring containerd to use "systemd" as cgroup driver...
	I1218 13:47:21.153022   13916 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1218 13:47:21.182941   13916 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 13:47:21.211971   13916 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 13:47:21.243422   13916 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 13:47:21.286398   13916 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 13:47:21.336576   13916 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
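Because this test forces the systemd cgroup driver, /etc/containerd/config.toml is rewritten in place with the sed one-liners above: SystemdCgroup is flipped to true, the runc shim is normalized to io.containerd.runc.v2, and conf_dir is pointed at /etc/cni/net.d. A Go sketch of the SystemdCgroup edit, mirroring that sed's regex; error handling is simplified:

    // A sketch mirroring the SystemdCgroup sed above.
    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
        if err := os.WriteFile(path, data, 0o644); err != nil {
            panic(err)
        }
    }

The same driver is enforced on both runtimes in this run: containerd is configured first, and the Docker side follows below once docker.service is detected as the active runtime.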
	I1218 13:47:21.376578   13916 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 13:47:21.411586   13916 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 13:47:21.451746   13916 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 13:47:21.650783   13916 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 13:47:21.682212   13916 start.go:475] detecting cgroup driver to use...
	I1218 13:47:21.682389   13916 start.go:479] using "systemd" cgroup driver as enforced via flags
	I1218 13:47:21.698201   13916 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1218 13:47:21.737979   13916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 13:47:21.776573   13916 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1218 13:47:21.830094   13916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 13:47:21.867911   13916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 13:47:21.901242   13916 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 13:47:21.959209   13916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 13:47:21.980047   13916 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 13:47:22.028591   13916 ssh_runner.go:195] Run: which cri-dockerd
	I1218 13:47:22.052758   13916 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1218 13:47:22.068329   13916 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1218 13:47:22.110438   13916 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1218 13:47:22.300232   13916 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1218 13:47:22.471736   13916 docker.go:560] configuring docker to use "systemd" as cgroup driver...
	I1218 13:47:22.472023   13916 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
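The matching Docker-side change is a small /etc/docker/daemon.json (129 bytes here). Its contents are not shown in the log, so the payload below is an assumption based on Docker's documented native.cgroupdriver exec option, written as a Go sketch for consistency with the examples above:

    // The daemon.json contents are an assumption; only its size appears
    // in the log.
    package main

    import (
        "encoding/json"
        "os"
    )

    func main() {
        cfg := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=systemd"},
        }
        data, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/docker/daemon.json", data, 0o644); err != nil {
            panic(err)
        }
    }

Restarting Docker with this configuration is exactly the step that fails below, which is why the run exits with RUNTIME_ENABLE and dumps journalctl --no-pager -u docker.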
	I1218 13:47:22.520470   13916 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 13:47:22.703776   13916 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1218 13:48:23.819850   13916 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1157438s)
	I1218 13:48:23.838268   13916 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1218 13:48:23.868534   13916 out.go:177] 
	W1218 13:48:23.869246   13916 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2023-12-18 13:46:00 UTC, ends at Mon 2023-12-18 13:48:23 UTC. --
	Dec 18 13:46:50 force-systemd-flag-455900 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[682]: time="2023-12-18T13:46:50.842939121Z" level=info msg="Starting up"
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[682]: time="2023-12-18T13:46:50.844109917Z" level=info msg="containerd not running, starting managed containerd"
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[682]: time="2023-12-18T13:46:50.845614841Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=688
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.877744891Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.903538218Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.903568621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.906001621Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.906123431Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.906450858Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.906542766Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.906801087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.907030306Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.907191419Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.907349532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.907819971Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	[... dockerd/containerd startup log lines identical to the `sudo journalctl --no-pager -u docker` output below ...]
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2023-12-18 13:46:00 UTC, ends at Mon 2023-12-18 13:48:23 UTC. --
	Dec 18 13:46:50 force-systemd-flag-455900 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[682]: time="2023-12-18T13:46:50.842939121Z" level=info msg="Starting up"
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[682]: time="2023-12-18T13:46:50.844109917Z" level=info msg="containerd not running, starting managed containerd"
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[682]: time="2023-12-18T13:46:50.845614841Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=688
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.877744891Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.903538218Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.903568621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.906001621Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.906123431Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.906450858Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.906542766Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.906801087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.907030306Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.907191419Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.907349532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.907819971Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.908021488Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.908041589Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.908437122Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.908494827Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.908575633Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.908614737Z" level=info msg="metadata content store policy set" policy=shared
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.923454760Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.923577370Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.923690980Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.923755985Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.923777387Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.923860594Z" level=info msg="NRI interface is disabled by configuration."
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.923994905Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.924114015Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.924215123Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.924236125Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.924254226Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.924272228Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.924292229Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.924317332Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.924333633Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.924350034Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.924366236Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.924379537Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.924463144Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.924721365Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925115397Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925166902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925185803Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925209805Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925258209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925273610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925286211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925300613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925314114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925328015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925339916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925351917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925368718Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925422723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925438724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925451025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925465026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925478527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925492828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925505429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925525131Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925542033Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925554434Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.925565634Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.926011071Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.926134881Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.926180285Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[688]: time="2023-12-18T13:46:50.926221489Z" level=info msg="containerd successfully booted in 0.049862s"
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[682]: time="2023-12-18T13:46:50.966265991Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 18 13:46:50 force-systemd-flag-455900 dockerd[682]: time="2023-12-18T13:46:50.980126534Z" level=info msg="Loading containers: start."
	Dec 18 13:46:51 force-systemd-flag-455900 dockerd[682]: time="2023-12-18T13:46:51.200189154Z" level=info msg="Loading containers: done."
	Dec 18 13:46:51 force-systemd-flag-455900 dockerd[682]: time="2023-12-18T13:46:51.217551496Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 18 13:46:51 force-systemd-flag-455900 dockerd[682]: time="2023-12-18T13:46:51.217588199Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 18 13:46:51 force-systemd-flag-455900 dockerd[682]: time="2023-12-18T13:46:51.217594899Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 18 13:46:51 force-systemd-flag-455900 dockerd[682]: time="2023-12-18T13:46:51.217601000Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 18 13:46:51 force-systemd-flag-455900 dockerd[682]: time="2023-12-18T13:46:51.217618801Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 18 13:46:51 force-systemd-flag-455900 dockerd[682]: time="2023-12-18T13:46:51.217706808Z" level=info msg="Daemon has completed initialization"
	Dec 18 13:46:51 force-systemd-flag-455900 dockerd[682]: time="2023-12-18T13:46:51.269757132Z" level=info msg="API listen on [::]:2376"
	Dec 18 13:46:51 force-systemd-flag-455900 dockerd[682]: time="2023-12-18T13:46:51.269958648Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 18 13:46:51 force-systemd-flag-455900 systemd[1]: Started Docker Application Container Engine.
	Dec 18 13:47:22 force-systemd-flag-455900 dockerd[682]: time="2023-12-18T13:47:22.742161111Z" level=info msg="Processing signal 'terminated'"
	Dec 18 13:47:22 force-systemd-flag-455900 dockerd[682]: time="2023-12-18T13:47:22.743320311Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 18 13:47:22 force-systemd-flag-455900 dockerd[682]: time="2023-12-18T13:47:22.743772411Z" level=info msg="Daemon shutdown complete"
	Dec 18 13:47:22 force-systemd-flag-455900 dockerd[682]: time="2023-12-18T13:47:22.743816811Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 18 13:47:22 force-systemd-flag-455900 dockerd[682]: time="2023-12-18T13:47:22.743959711Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 18 13:47:22 force-systemd-flag-455900 systemd[1]: Stopping Docker Application Container Engine...
	Dec 18 13:47:23 force-systemd-flag-455900 systemd[1]: docker.service: Succeeded.
	Dec 18 13:47:23 force-systemd-flag-455900 systemd[1]: Stopped Docker Application Container Engine.
	Dec 18 13:47:23 force-systemd-flag-455900 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 13:47:23 force-systemd-flag-455900 dockerd[1017]: time="2023-12-18T13:47:23.820362911Z" level=info msg="Starting up"
	Dec 18 13:48:23 force-systemd-flag-455900 dockerd[1017]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 18 13:48:23 force-systemd-flag-455900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 13:48:23 force-systemd-flag-455900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 18 13:48:23 force-systemd-flag-455900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1218 13:48:23.869246   13916 out.go:239] * 
	W1218 13:48:23.871364   13916 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 13:48:23.872322   13916 out.go:177] 
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-flag-455900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv" : exit status 90
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-455900 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-455900 ssh "docker info --format {{.CgroupDriver}}": (1m0.0311424s)
docker_test.go:115: expected systemd cgroup driver, got: 
-- stdout --
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
-- /stdout --
** stderr ** 
	W1218 13:48:24.316062    8472 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
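For context: docker_test.go:110 reads the VM's cgroup driver with `docker info --format {{.CgroupDriver}}`, and docker_test.go:115 expects `systemd`; because dockerd never came back up, the command returned only the connection error above. Below is a minimal Go sketch of that kind of check, run against a local daemon rather than over `minikube ssh` (the helper name is illustrative, not the test's own):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// cgroupDriver reads the daemon's cgroup driver the same way the test
	// does: `docker info --format {{.CgroupDriver}}`. With --force-systemd
	// the expected value is "systemd"; an error means the daemon is
	// unreachable, as in the run above.
	func cgroupDriver() (string, error) {
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		driver, err := cgroupDriver()
		if err != nil {
			fmt.Println("docker info failed:", err)
			return
		}
		if driver != "systemd" {
			fmt.Printf("expected systemd cgroup driver, got: %q\n", driver)
		}
	}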
panic.go:523: *** TestForceSystemdFlag FAILED at 2023-12-18 13:49:24.1913122 +0000 UTC m=+7661.870826801
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-455900 -n force-systemd-flag-455900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-455900 -n force-systemd-flag-455900: exit status 6 (13.288773s)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	W1218 13:49:24.311779   14920 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1218 13:49:37.401808   14920 status.go:415] kubeconfig endpoint: extract IP: "force-systemd-flag-455900" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "force-systemd-flag-455900" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "force-systemd-flag-455900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-455900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-455900: (1m1.2893593s)
--- FAIL: TestForceSystemdFlag (632.06s)
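The decisive failure in this test is the second dockerd start (pid 1017): it waited sixty seconds for containerd's socket and gave up with `context deadline exceeded`. That failure shape is a bounded retry-dial on a unix socket; the Go sketch below illustrates the mechanism under that assumption (it is not dockerd's actual startup code):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	// waitForSocket retries a unix-socket dial until it succeeds or the
	// context deadline expires: the same shape of failure dockerd
	// reported for /run/containerd/containerd.sock.
	func waitForSocket(ctx context.Context, path string) error {
		var d net.Dialer
		for {
			conn, err := d.DialContext(ctx, "unix", path)
			if err == nil {
				conn.Close()
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("failed to dial %q: %w", path, ctx.Err())
			case <-time.After(500 * time.Millisecond):
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
		defer cancel()
		if err := waitForSocket(ctx, "/run/containerd/containerd.sock"); err != nil {
			fmt.Println(err)
		}
	}

On a machine where containerd is stopped, this prints the same "failed to dial ... context deadline exceeded" wrapping seen in the journal above.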
x
+
TestForceSystemdEnv (521.31s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-915900 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-env-915900 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: exit status 90 (6m26.1777512s)
-- stdout --
	* [force-systemd-env-915900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17824
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node force-systemd-env-915900 in cluster force-systemd-env-915900
	* Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	
-- /stdout --
** stderr ** 
	W1218 13:39:46.423009   12136 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1218 13:39:46.494293   12136 out.go:296] Setting OutFile to fd 1732 ...
	I1218 13:39:46.495287   12136 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 13:39:46.495287   12136 out.go:309] Setting ErrFile to fd 1000...
	I1218 13:39:46.495287   12136 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 13:39:46.521122   12136 out.go:303] Setting JSON to false
	I1218 13:39:46.526085   12136 start.go:128] hostinfo: {"hostname":"minikube7","uptime":7261,"bootTime":1702899525,"procs":206,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1218 13:39:46.526085   12136 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1218 13:39:46.528103   12136 out.go:177] * [force-systemd-env-915900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1218 13:39:46.529433   12136 notify.go:220] Checking for updates...
	I1218 13:39:46.530283   12136 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 13:39:46.531201   12136 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1218 13:39:46.531975   12136 out.go:177]   - MINIKUBE_LOCATION=17824
	I1218 13:39:46.533210   12136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 13:39:46.534626   12136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1218 13:39:46.536635   12136 config.go:182] Loaded profile config "NoKubernetes-137000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 13:39:46.536733   12136 config.go:182] Loaded profile config "docker-flags-904000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 13:39:46.537668   12136 config.go:182] Loaded profile config "multinode-015900-m01": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 13:39:46.538184   12136 config.go:182] Loaded profile config "pause-984000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 13:39:46.538376   12136 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 13:39:52.551208   12136 out.go:177] * Using the hyperv driver based on user configuration
	I1218 13:39:52.552691   12136 start.go:298] selected driver: hyperv
	I1218 13:39:52.552760   12136 start.go:902] validating driver "hyperv" against <nil>
	I1218 13:39:52.552821   12136 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 13:39:52.606050   12136 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1218 13:39:52.607009   12136 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1218 13:39:52.607009   12136 cni.go:84] Creating CNI manager for ""
	I1218 13:39:52.607009   12136 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1218 13:39:52.607934   12136 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1218 13:39:52.607934   12136 start_flags.go:323] config:
	{Name:force-systemd-env-915900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-915900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 13:39:52.608021   12136 iso.go:125] acquiring lock: {Name:mka7fdf4332632f41b99a369cc525e40499a0030 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 13:39:52.609018   12136 out.go:177] * Starting control plane node force-systemd-env-915900 in cluster force-systemd-env-915900
	I1218 13:39:52.610021   12136 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 13:39:52.610021   12136 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1218 13:39:52.610021   12136 cache.go:56] Caching tarball of preloaded images
	I1218 13:39:52.610021   12136 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1218 13:39:52.610021   12136 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1218 13:39:52.611025   12136 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\force-systemd-env-915900\config.json ...
	I1218 13:39:52.611025   12136 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\force-systemd-env-915900\config.json: {Name:mk12af3ac6f2e8da816b2bb086a6516e9a5cba37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 13:39:52.612010   12136 start.go:365] acquiring machines lock for force-systemd-env-915900: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1218 13:42:38.784423   12136 start.go:369] acquired machines lock for "force-systemd-env-915900" in 2m46.1716651s
	I1218 13:42:38.784896   12136 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-915900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-915900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1218 13:42:38.785189   12136 start.go:125] createHost starting for "" (driver="hyperv")
	I1218 13:42:38.786236   12136 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1218 13:42:38.788359   12136 start.go:159] libmachine.API.Create for "force-systemd-env-915900" (driver="hyperv")
	I1218 13:42:38.788359   12136 client.go:168] LocalClient.Create starting
	I1218 13:42:38.789081   12136 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1218 13:42:38.789663   12136 main.go:141] libmachine: Decoding PEM data...
	I1218 13:42:38.789749   12136 main.go:141] libmachine: Parsing certificate...
	I1218 13:42:38.790160   12136 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1218 13:42:38.790580   12136 main.go:141] libmachine: Decoding PEM data...
	I1218 13:42:38.790644   12136 main.go:141] libmachine: Parsing certificate...
	I1218 13:42:38.790774   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1218 13:42:41.113307   12136 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1218 13:42:41.113425   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:42:41.113425   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1218 13:42:43.131773   12136 main.go:141] libmachine: [stdout =====>] : False
	
	I1218 13:42:43.131773   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:42:43.131890   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1218 13:42:44.892346   12136 main.go:141] libmachine: [stdout =====>] : True
	
	I1218 13:42:44.892685   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:42:44.892870   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1218 13:42:49.498927   12136 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1218 13:42:49.498927   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:42:49.501865   12136 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702490427-17765-amd64.iso...
	I1218 13:42:50.015081   12136 main.go:141] libmachine: Creating SSH key...
	I1218 13:42:50.313915   12136 main.go:141] libmachine: Creating VM...
	I1218 13:42:50.313915   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1218 13:42:53.660894   12136 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1218 13:42:53.661288   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:42:53.661345   12136 main.go:141] libmachine: Using switch "Default Switch"
	I1218 13:42:53.661401   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1218 13:42:55.552561   12136 main.go:141] libmachine: [stdout =====>] : True
	
	I1218 13:42:55.552805   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:42:55.553172   12136 main.go:141] libmachine: Creating VHD
	I1218 13:42:55.553338   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-env-915900\fixed.vhd' -SizeBytes 10MB -Fixed
	I1218 13:42:59.489306   12136 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-env-915900\f
	                          ixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 8D03E932-96D1-49E6-A4BC-B2B979628A58
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1218 13:42:59.489417   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:42:59.489417   12136 main.go:141] libmachine: Writing magic tar header
	I1218 13:42:59.489500   12136 main.go:141] libmachine: Writing SSH key tar header
	I1218 13:42:59.499654   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-env-915900\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-env-915900\disk.vhd' -VHDType Dynamic -DeleteSource
	I1218 13:43:02.906606   12136 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:43:02.906847   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:43:02.907009   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-env-915900\disk.vhd' -SizeBytes 20000MB
	I1218 13:43:05.590920   12136 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:43:05.590920   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:43:05.591048   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM force-systemd-env-915900 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-env-915900' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
	I1218 13:43:09.254009   12136 main.go:141] libmachine: [stdout =====>] : 
	Name                     State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                     ----- ----------- ----------------- ------   ------             -------
	force-systemd-env-915900 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1218 13:43:09.254195   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:43:09.254195   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName force-systemd-env-915900 -DynamicMemoryEnabled $false
	I1218 13:43:11.621194   12136 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:43:11.621489   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:43:11.621489   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor force-systemd-env-915900 -Count 2
	I1218 13:43:13.966318   12136 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:43:13.966495   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:43:13.966495   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName force-systemd-env-915900 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-env-915900\boot2docker.iso'
	I1218 13:43:16.834539   12136 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:43:16.834539   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:43:16.834539   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName force-systemd-env-915900 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-env-915900\disk.vhd'
	I1218 13:43:20.207162   12136 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:43:20.207404   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:43:20.207404   12136 main.go:141] libmachine: Starting VM...
	I1218 13:43:20.207646   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM force-systemd-env-915900
	I1218 13:43:24.630084   12136 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:43:24.630119   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:43:24.630119   12136 main.go:141] libmachine: Waiting for host to start...
	I1218 13:43:24.630253   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-915900 ).state
	I1218 13:43:27.061185   12136 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:43:27.061185   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:43:27.061185   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-915900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:43:29.924096   12136 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:43:29.924470   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:43:30.936496   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-915900 ).state
	I1218 13:43:33.704050   12136 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:43:33.704113   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:43:33.704149   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-915900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:43:36.826398   12136 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:43:36.826560   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:43:37.834631   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-915900 ).state
	I1218 13:43:40.120573   12136 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:43:40.120573   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:43:40.120573   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-915900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:43:42.718262   12136 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:43:42.718496   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:43:43.731344   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-915900 ).state
	I1218 13:43:46.176576   12136 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:43:46.176629   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:43:46.176629   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-915900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:43:48.914410   12136 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:43:48.914539   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:43:49.929504   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-915900 ).state
	I1218 13:43:52.333795   12136 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:43:52.333795   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:43:52.333795   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-915900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:43:55.065693   12136 main.go:141] libmachine: [stdout =====>] : 192.168.225.188
	
	I1218 13:43:55.065693   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:43:55.065693   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-915900 ).state
	I1218 13:43:57.429962   12136 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:43:57.429962   12136 main.go:141] libmachine: [stderr =====>] : 
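	The block above is the driver's wait-for-IP loop: it alternates between querying the VM state and the first adapter's first IP address until the DHCP lease appears; empty stdout means no address yet, and here 192.168.225.188 shows up after roughly 30 seconds. A hedged Go sketch of the same poll, with helper names and timings invented for illustration:

```go
// Minimal sketch of the wait-for-IP loop seen above: poll the VM's first
// network adapter via PowerShell until an address shows up. Helper names
// and the timeout are illustrative, not minikube's actual code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// psOutput runs one Hyper-V cmdlet pipeline non-interactively and returns
// its trimmed stdout, mirroring the "[executing ==>]" / "[stdout =====>]"
// pairs in the log above.
func psOutput(command string) (string, error) {
	out, err := exec.Command(
		"powershell.exe", "-NoProfile", "-NonInteractive", command,
	).Output()
	return strings.TrimSpace(string(out)), err
}

func waitForIP(vm string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	query := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm)
	for time.Now().Before(deadline) {
		ip, err := psOutput(query)
		if err == nil && ip != "" {
			return ip, nil // e.g. 192.168.225.188 in the run above
		}
		// Short pause; the ~6s cycle visible in the log is mostly
		// PowerShell start-up latency, not an explicit sleep.
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("no IP for %s within %s", vm, timeout)
}

func main() {
	ip, err := waitForIP("force-systemd-env-915900", 3*time.Minute)
	fmt.Println(ip, err)
}
```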
	I1218 13:43:57.430077   12136 machine.go:88] provisioning docker machine ...
	I1218 13:43:57.430077   12136 buildroot.go:166] provisioning hostname "force-systemd-env-915900"
	I1218 13:43:57.430221   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-915900 ).state
	I1218 13:43:59.703574   12136 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:43:59.703574   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:43:59.703733   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-915900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:44:02.407782   12136 main.go:141] libmachine: [stdout =====>] : 192.168.225.188
	
	I1218 13:44:02.408008   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:44:02.414952   12136 main.go:141] libmachine: Using SSH client type: native
	I1218 13:44:02.415639   12136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.225.188 22 <nil> <nil>}
	I1218 13:44:02.415639   12136 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-915900 && echo "force-systemd-env-915900" | sudo tee /etc/hostname
	I1218 13:44:02.586589   12136 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-915900
	
	I1218 13:44:02.586752   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-915900 ).state
	I1218 13:44:04.835674   12136 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:44:04.835757   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:44:04.835757   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-915900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:44:07.531230   12136 main.go:141] libmachine: [stdout =====>] : 192.168.225.188
	
	I1218 13:44:07.531230   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:44:07.536651   12136 main.go:141] libmachine: Using SSH client type: native
	I1218 13:44:07.537297   12136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.225.188 22 <nil> <nil>}
	I1218 13:44:07.537297   12136 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-915900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-915900/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-915900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 13:44:07.690121   12136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
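	The hostname step sends two SSH commands: set the hostname, then patch /etc/hosts so 127.0.1.1 maps to it, with the script guarded so the edit only happens once. A sketch of how that guard script can be rendered from the machine name (the exact templating minikube uses may differ):

```go
// Sketch of rendering the /etc/hosts guard script shown above from the
// machine name before sending it over SSH. Illustrative only.
package main

import "fmt"

func hostsFixupScript(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() { fmt.Println(hostsFixupScript("force-systemd-env-915900")) }
```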
	I1218 13:44:07.690121   12136 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1218 13:44:07.690273   12136 buildroot.go:174] setting up certificates
	I1218 13:44:07.690273   12136 provision.go:83] configureAuth start
	I1218 13:44:07.690401   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-915900 ).state
	I1218 13:44:09.927326   12136 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:44:09.927326   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:44:09.927450   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-915900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:44:12.589492   12136 main.go:141] libmachine: [stdout =====>] : 192.168.225.188
	
	I1218 13:44:12.589680   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:44:12.589680   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-915900 ).state
	I1218 13:44:14.872859   12136 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:44:14.873135   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:44:14.873196   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-915900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:44:17.566215   12136 main.go:141] libmachine: [stdout =====>] : 192.168.225.188
	
	I1218 13:44:17.566215   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:44:17.566215   12136 provision.go:138] copyHostCerts
	I1218 13:44:17.566215   12136 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I1218 13:44:17.566215   12136 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1218 13:44:17.566215   12136 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1218 13:44:17.566853   12136 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1218 13:44:17.568595   12136 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I1218 13:44:17.568817   12136 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1218 13:44:17.568817   12136 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1218 13:44:17.568817   12136 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1218 13:44:17.569938   12136 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I1218 13:44:17.569938   12136 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1218 13:44:17.570480   12136 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1218 13:44:17.570576   12136 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1218 13:44:17.571373   12136 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.force-systemd-env-915900 san=[192.168.225.188 192.168.225.188 localhost 127.0.0.1 minikube force-systemd-env-915900]
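	The san=[...] list above mixes IP and DNS subject alternative names, so the docker TLS endpoint verifies whether it is reached by address or by name. A self-contained Go sketch of signing such a server cert against a CA; the throwaway CA, key sizes, and lifetimes are illustrative, and only the SAN handling mirrors the log line:

```go
// Sketch of the "generating server cert" step above: sign a server
// certificate carrying both IP and DNS SANs. minikube's real implementation
// reuses ca.pem/ca-key.pem from its cert store; here the CA is throwaway.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA, self-signed.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"jenkins.force-systemd-env-915900"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the same SAN set as the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "force-systemd-env-915900"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("192.168.225.188"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "force-systemd-env-915900"},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println(len(der), err)
}
```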
	I1218 13:44:17.776668   12136 provision.go:172] copyRemoteCerts
	I1218 13:44:17.790646   12136 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 13:44:17.790646   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-915900 ).state
	I1218 13:44:20.042058   12136 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:44:20.042233   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:44:20.042449   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-915900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:44:22.759301   12136 main.go:141] libmachine: [stdout =====>] : 192.168.225.188
	
	I1218 13:44:22.759486   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:44:22.759557   12136 sshutil.go:53] new ssh client: &{IP:192.168.225.188 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-env-915900\id_rsa Username:docker}
	I1218 13:44:22.868872   12136 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0782044s)
	I1218 13:44:22.869009   12136 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1218 13:44:22.869289   12136 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I1218 13:44:22.908422   12136 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1218 13:44:22.908585   12136 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 13:44:22.947417   12136 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1218 13:44:22.947483   12136 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1218 13:44:22.985657   12136 provision.go:86] duration metric: configureAuth took 15.2953188s
	I1218 13:44:22.985657   12136 buildroot.go:189] setting minikube options for container-runtime
	I1218 13:44:22.986283   12136 config.go:182] Loaded profile config "force-systemd-env-915900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 13:44:22.986283   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-915900 ).state
	I1218 13:44:25.187694   12136 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:44:25.187772   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:44:25.187772   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-915900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:44:27.779743   12136 main.go:141] libmachine: [stdout =====>] : 192.168.225.188
	
	I1218 13:44:27.779743   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:44:27.784443   12136 main.go:141] libmachine: Using SSH client type: native
	I1218 13:44:27.785337   12136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.225.188 22 <nil> <nil>}
	I1218 13:44:27.785337   12136 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1218 13:44:27.928477   12136 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1218 13:44:27.928477   12136 buildroot.go:70] root file system type: tmpfs
	I1218 13:44:27.928791   12136 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1218 13:44:27.928791   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-915900 ).state
	I1218 13:44:30.182216   12136 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:44:30.182216   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:44:30.182306   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-915900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:44:32.910374   12136 main.go:141] libmachine: [stdout =====>] : 192.168.225.188
	
	I1218 13:44:32.910374   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:44:32.915842   12136 main.go:141] libmachine: Using SSH client type: native
	I1218 13:44:32.916448   12136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.225.188 22 <nil> <nil>}
	I1218 13:44:32.916448   12136 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1218 13:44:33.080205   12136 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1218 13:44:33.080306   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-915900 ).state
	I1218 13:44:35.308128   12136 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:44:35.308191   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:44:35.308191   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-915900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:44:37.966613   12136 main.go:141] libmachine: [stdout =====>] : 192.168.225.188
	
	I1218 13:44:37.966613   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:44:37.972005   12136 main.go:141] libmachine: Using SSH client type: native
	I1218 13:44:37.973324   12136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.225.188 22 <nil> <nil>}
	I1218 13:44:37.973324   12136 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1218 13:44:38.958865   12136 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1218 13:44:38.958939   12136 machine.go:91] provisioned docker machine in 41.5286837s
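	The unit swap above is deliberately idempotent: the rendered file is written to docker.service.new, and only when `diff -u` reports a difference (or, as on this first boot, cannot stat the old file at all) does the brace group move it into place, daemon-reload, enable, and restart docker. A sketch of building that one-liner:

```go
// Sketch of the idempotent unit-update command above: replace and restart
// only when the rendered unit differs from what is on disk. diff exiting
// non-zero, including "can't stat" on first boot, triggers the swap branch.
package main

import "fmt"

func updateUnitCmd(path string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
			"sudo systemctl -f restart docker; }", path)
}

func main() { fmt.Println(updateUnitCmd("/lib/systemd/system/docker.service")) }
```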
	I1218 13:44:38.958939   12136 client.go:171] LocalClient.Create took 2m0.170064s
	I1218 13:44:38.959057   12136 start.go:167] duration metric: libmachine.API.Create for "force-systemd-env-915900" took 2m0.1701818s
	I1218 13:44:38.959099   12136 start.go:300] post-start starting for "force-systemd-env-915900" (driver="hyperv")
	I1218 13:44:38.959099   12136 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 13:44:38.972017   12136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 13:44:38.972017   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-915900 ).state
	I1218 13:44:41.160685   12136 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:44:41.160685   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:44:41.160822   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-915900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:44:43.733314   12136 main.go:141] libmachine: [stdout =====>] : 192.168.225.188
	
	I1218 13:44:43.733314   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:44:43.733454   12136 sshutil.go:53] new ssh client: &{IP:192.168.225.188 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-env-915900\id_rsa Username:docker}
	I1218 13:44:43.841829   12136 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8697905s)
	I1218 13:44:43.856415   12136 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 13:44:43.862439   12136 info.go:137] Remote host: Buildroot 2021.02.12
	I1218 13:44:43.862439   12136 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1218 13:44:43.862982   12136 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1218 13:44:43.864282   12136 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem -> 149282.pem in /etc/ssl/certs
	I1218 13:44:43.864282   12136 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem -> /etc/ssl/certs/149282.pem
	I1218 13:44:43.878639   12136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 13:44:43.895479   12136 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem --> /etc/ssl/certs/149282.pem (1708 bytes)
	I1218 13:44:43.935913   12136 start.go:303] post-start completed in 4.976793s
	I1218 13:44:43.938478   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-915900 ).state
	I1218 13:44:46.144743   12136 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:44:46.144952   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:44:46.144952   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-915900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:44:48.788321   12136 main.go:141] libmachine: [stdout =====>] : 192.168.225.188
	
	I1218 13:44:48.788701   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:44:48.788943   12136 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\force-systemd-env-915900\config.json ...
	I1218 13:44:48.817360   12136 start.go:128] duration metric: createHost completed in 2m10.0316125s
	I1218 13:44:48.817915   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-915900 ).state
	I1218 13:44:51.054244   12136 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:44:51.054621   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:44:51.054737   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-915900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:44:53.711462   12136 main.go:141] libmachine: [stdout =====>] : 192.168.225.188
	
	I1218 13:44:53.711462   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:44:53.718228   12136 main.go:141] libmachine: Using SSH client type: native
	I1218 13:44:53.718978   12136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.225.188 22 <nil> <nil>}
	I1218 13:44:53.718978   12136 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1218 13:44:53.855631   12136 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702907093.866923892
	
	I1218 13:44:53.855696   12136 fix.go:206] guest clock: 1702907093.866923892
	I1218 13:44:53.855696   12136 fix.go:219] Guest: 2023-12-18 13:44:53.866923892 +0000 UTC Remote: 2023-12-18 13:44:48.8179159 +0000 UTC m=+302.503604701 (delta=5.049007992s)
	I1218 13:44:53.855789   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-915900 ).state
	I1218 13:44:56.075259   12136 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:44:56.075425   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:44:56.075503   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-915900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:44:58.701300   12136 main.go:141] libmachine: [stdout =====>] : 192.168.225.188
	
	I1218 13:44:58.701562   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:44:58.706773   12136 main.go:141] libmachine: Using SSH client type: native
	I1218 13:44:58.707756   12136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.225.188 22 <nil> <nil>}
	I1218 13:44:58.707756   12136 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702907093
	I1218 13:44:58.861055   12136 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Dec 18 13:44:53 UTC 2023
	
	I1218 13:44:58.861115   12136 fix.go:226] clock set: Mon Dec 18 13:44:53 UTC 2023
	 (err=<nil>)
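	The clock fixup reads the guest's `date +%s.%N`, compares it against the host-side reference (a 5.049s delta in this run), and hard-sets the guest clock over SSH when the drift is material. A hedged sketch of the parse-and-compare half; which epoch value minikube actually passes to `date -s` is an internal detail, and the threshold below is illustrative:

```go
// Sketch of the guest-clock drift check above. parseGuestClock assumes the
// `date +%s.%N` format with a 9-digit nanosecond fraction, as in the log.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(out), ".")
	s, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	ns, _ := strconv.ParseInt(frac, 10, 64) // assumes 9 fractional digits
	return time.Unix(s, ns), nil
}

func main() {
	guest, _ := parseGuestClock("1702907093.866923892")                // from the guest
	host := time.Date(2023, 12, 18, 13, 44, 48, 817915900, time.UTC)  // host-side reference
	if delta := guest.Sub(host); delta > time.Second || delta < -time.Second {
		// In the log this becomes a command of the form: sudo date -s @<epoch>
		fmt.Printf("drift %s -> sudo date -s @%d\n", delta, time.Now().Unix())
	}
}
```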
	I1218 13:44:58.861115   12136 start.go:83] releasing machines lock for "force-systemd-env-915900", held for 2m20.0759583s
	I1218 13:44:58.861384   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-915900 ).state
	I1218 13:45:01.118443   12136 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:45:01.118501   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:01.118609   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-915900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:45:03.806736   12136 main.go:141] libmachine: [stdout =====>] : 192.168.225.188
	
	I1218 13:45:03.806954   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:03.810991   12136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 13:45:03.811074   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-915900 ).state
	I1218 13:45:03.834440   12136 ssh_runner.go:195] Run: cat /version.json
	I1218 13:45:03.834440   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-915900 ).state
	I1218 13:45:06.409249   12136 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:45:06.409425   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:06.409493   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-915900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:45:06.439779   12136 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:45:06.439949   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:06.439949   12136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-915900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:45:09.365847   12136 main.go:141] libmachine: [stdout =====>] : 192.168.225.188
	
	I1218 13:45:09.365847   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:09.365847   12136 sshutil.go:53] new ssh client: &{IP:192.168.225.188 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-env-915900\id_rsa Username:docker}
	I1218 13:45:09.467378   12136 ssh_runner.go:235] Completed: cat /version.json: (5.6329138s)
	I1218 13:45:09.476012   12136 main.go:141] libmachine: [stdout =====>] : 192.168.225.188
	
	I1218 13:45:09.476345   12136 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:45:09.476754   12136 sshutil.go:53] new ssh client: &{IP:192.168.225.188 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\force-systemd-env-915900\id_rsa Username:docker}
	I1218 13:45:09.483257   12136 ssh_runner.go:195] Run: systemctl --version
	I1218 13:45:09.509492   12136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 13:45:09.516250   12136 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 13:45:09.529745   12136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 13:45:09.602685   12136 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.7915867s)
	I1218 13:45:09.604119   12136 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1218 13:45:09.604119   12136 start.go:475] detecting cgroup driver to use...
	I1218 13:45:09.604215   12136 start.go:479] using "systemd" cgroup driver as enforced via flags
	I1218 13:45:09.604471   12136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 13:45:09.649779   12136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1218 13:45:09.684616   12136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 13:45:09.707539   12136 containerd.go:145] configuring containerd to use "systemd" as cgroup driver...
	I1218 13:45:09.721307   12136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1218 13:45:09.756045   12136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 13:45:09.789349   12136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 13:45:09.823294   12136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 13:45:09.853173   12136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 13:45:09.885483   12136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 13:45:09.918853   12136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 13:45:09.949617   12136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 13:45:09.980576   12136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 13:45:10.174297   12136 ssh_runner.go:195] Run: sudo systemctl restart containerd
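	The run of sed edits above rewrites /etc/containerd/config.toml in place: pin the pause image, disable restrict_oom_score_adj, force SystemdCgroup = true for the runc runtime, and migrate runtime v1 and runc.v1 references to runc.v2, before reloading and restarting containerd. The SystemdCgroup rewrite, redone with Go's regexp against a sample fragment:

```go
// The same edit as the log's
//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
// performed with Go's regexp on a sample config.toml fragment.
package main

import (
	"fmt"
	"regexp"
)

const sample = `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = false
`

func main() {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(sample, "${1}SystemdCgroup = true"))
}
```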
	I1218 13:45:10.205115   12136 start.go:475] detecting cgroup driver to use...
	I1218 13:45:10.205222   12136 start.go:479] using "systemd" cgroup driver as enforced via flags
	I1218 13:45:10.219095   12136 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1218 13:45:10.267273   12136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 13:45:10.313377   12136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1218 13:45:10.369368   12136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 13:45:10.409340   12136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 13:45:10.454655   12136 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 13:45:10.510687   12136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 13:45:10.534711   12136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 13:45:10.583716   12136 ssh_runner.go:195] Run: which cri-dockerd
	I1218 13:45:10.607737   12136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1218 13:45:10.625965   12136 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1218 13:45:10.666201   12136 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1218 13:45:10.849205   12136 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1218 13:45:11.013812   12136 docker.go:560] configuring docker to use "systemd" as cgroup driver...
	I1218 13:45:11.013812   12136 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
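	The 129-byte /etc/docker/daemon.json itself is not shown in the log; given the preceding "configuring docker to use \"systemd\" as cgroup driver" line, a file of roughly the following shape is what gets copied over (the exact field set is an assumption):

```go
// Hedged sketch of generating a daemon.json that forces the systemd cgroup
// driver. The field set is assumed from the log line above, not confirmed.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=systemd"}, // assumed payload
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}
```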
	I1218 13:45:11.056188   12136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 13:45:11.239022   12136 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1218 13:46:12.353526   12136 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1141038s)
	I1218 13:46:12.367705   12136 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1218 13:46:12.399052   12136 out.go:177] 
	W1218 13:46:12.400228   12136 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2023-12-18 13:43:43 UTC, ends at Mon 2023-12-18 13:46:12 UTC. --
	Dec 18 13:44:38 force-systemd-env-915900 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.549677943Z" level=info msg="Starting up"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.550597113Z" level=info msg="containerd not running, starting managed containerd"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.551624191Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=685
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.586068199Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.614326538Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.614515753Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.616618112Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.616721620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.617065346Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.617154553Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.617258960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.617399471Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.617545482Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.617657491Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.618190131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.618281038Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.618297839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.618441750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.618581161Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.618649866Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.618736672Z" level=info msg="metadata content store policy set" policy=shared
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.631836464Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.631991776Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632015278Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632077582Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632102184Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632299299Z" level=info msg="NRI interface is disabled by configuration."
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632321301Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632517016Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632560619Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632580221Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632596122Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632612923Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632634325Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632650526Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632676928Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632714631Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632751133Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632768735Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632806838Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632906845Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.633708706Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.633872718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.633905221Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.633995228Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634055432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634095835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634113237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634128338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634143539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634158840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634172641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634186442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634201843Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634315152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634449262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634539169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634595873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634615575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634632076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634645977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634659778Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634677379Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634697481Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634711882Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.635138314Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.635343430Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.635636352Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.635671355Z" level=info msg="containerd successfully booted in 0.051554s"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.674400187Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.691990419Z" level=info msg="Loading containers: start."
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.896894634Z" level=info msg="Loading containers: done."
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.913349580Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.913371182Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.913378883Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.913384983Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.913416385Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.913564097Z" level=info msg="Daemon has completed initialization"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.967873209Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 18 13:44:38 force-systemd-env-915900 systemd[1]: Started Docker Application Container Engine.
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.968155030Z" level=info msg="API listen on [::]:2376"
	Dec 18 13:45:11 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:45:11.273769876Z" level=info msg="Processing signal 'terminated'"
	Dec 18 13:45:11 force-systemd-env-915900 systemd[1]: Stopping Docker Application Container Engine...
	Dec 18 13:45:11 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:45:11.277166476Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 18 13:45:11 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:45:11.277218176Z" level=info msg="Daemon shutdown complete"
	Dec 18 13:45:11 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:45:11.277320876Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 18 13:45:11 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:45:11.277433676Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 18 13:45:12 force-systemd-env-915900 systemd[1]: docker.service: Succeeded.
	Dec 18 13:45:12 force-systemd-env-915900 systemd[1]: Stopped Docker Application Container Engine.
	Dec 18 13:45:12 force-systemd-env-915900 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 13:45:12 force-systemd-env-915900 dockerd[1033]: time="2023-12-18T13:45:12.353505276Z" level=info msg="Starting up"
	Dec 18 13:46:12 force-systemd-env-915900 dockerd[1033]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 18 13:46:12 force-systemd-env-915900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 13:46:12 force-systemd-env-915900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 18 13:46:12 force-systemd-env-915900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
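	Reading the journal: the first dockerd (pid 679) started its own managed containerd on /var/run/docker/containerd/containerd.sock and came up cleanly, while the restarted dockerd (pid 1033) instead blocks dialing the system socket /run/containerd/containerd.sock and hits the 60s dial deadline. That socket being down would be consistent with the earlier `sudo systemctl stop -f containerd`, though the log alone does not prove the causal chain. A quick check one could run from inside the guest:

```go
// Hedged diagnostic matching the failure above: see whether the system
// containerd socket that dockerd timed out on is accepting connections.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 2*time.Second)
	if err != nil {
		fmt.Println("containerd socket not reachable:", err) // expected in this run
		return
	}
	conn.Close()
	fmt.Println("containerd socket is up")
}
```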
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2023-12-18 13:43:43 UTC, ends at Mon 2023-12-18 13:46:12 UTC. --
	Dec 18 13:44:38 force-systemd-env-915900 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.549677943Z" level=info msg="Starting up"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.550597113Z" level=info msg="containerd not running, starting managed containerd"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.551624191Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=685
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.586068199Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.614326538Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.614515753Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.616618112Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.616721620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.617065346Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.617154553Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.617258960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.617399471Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.617545482Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.617657491Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.618190131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.618281038Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.618297839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.618441750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.618581161Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.618649866Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.618736672Z" level=info msg="metadata content store policy set" policy=shared
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.631836464Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.631991776Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632015278Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632077582Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632102184Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632299299Z" level=info msg="NRI interface is disabled by configuration."
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632321301Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632517016Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632560619Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632580221Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632596122Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632612923Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632634325Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632650526Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632676928Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632714631Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632751133Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632768735Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632806838Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.632906845Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.633708706Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.633872718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.633905221Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.633995228Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634055432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634095835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634113237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634128338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634143539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634158840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634172641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634186442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634201843Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634315152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634449262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634539169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634595873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634615575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634632076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634645977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634659778Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634677379Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634697481Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.634711882Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.635138314Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.635343430Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.635636352Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[685]: time="2023-12-18T13:44:38.635671355Z" level=info msg="containerd successfully booted in 0.051554s"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.674400187Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.691990419Z" level=info msg="Loading containers: start."
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.896894634Z" level=info msg="Loading containers: done."
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.913349580Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.913371182Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.913378883Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.913384983Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.913416385Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.913564097Z" level=info msg="Daemon has completed initialization"
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.967873209Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 18 13:44:38 force-systemd-env-915900 systemd[1]: Started Docker Application Container Engine.
	Dec 18 13:44:38 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:44:38.968155030Z" level=info msg="API listen on [::]:2376"
	Dec 18 13:45:11 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:45:11.273769876Z" level=info msg="Processing signal 'terminated'"
	Dec 18 13:45:11 force-systemd-env-915900 systemd[1]: Stopping Docker Application Container Engine...
	Dec 18 13:45:11 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:45:11.277166476Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 18 13:45:11 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:45:11.277218176Z" level=info msg="Daemon shutdown complete"
	Dec 18 13:45:11 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:45:11.277320876Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 18 13:45:11 force-systemd-env-915900 dockerd[679]: time="2023-12-18T13:45:11.277433676Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 18 13:45:12 force-systemd-env-915900 systemd[1]: docker.service: Succeeded.
	Dec 18 13:45:12 force-systemd-env-915900 systemd[1]: Stopped Docker Application Container Engine.
	Dec 18 13:45:12 force-systemd-env-915900 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 13:45:12 force-systemd-env-915900 dockerd[1033]: time="2023-12-18T13:45:12.353505276Z" level=info msg="Starting up"
	Dec 18 13:46:12 force-systemd-env-915900 dockerd[1033]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 18 13:46:12 force-systemd-env-915900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 13:46:12 force-systemd-env-915900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 18 13:46:12 force-systemd-env-915900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1218 13:46:12.400830   12136 out.go:239] * 
	W1218 13:46:12.402135   12136 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 13:46:12.403378   12136 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-env-915900 --memory=2048 --alsologtostderr -v=5 --driver=hyperv" : exit status 90
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-915900 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-915900 ssh "docker info --format {{.CgroupDriver}}": (1m0.1653171s)
docker_test.go:115: expected systemd cgroup driver, got: 
-- stdout --
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	

-- /stdout --
** stderr ** 
	W1218 13:46:12.814245   13184 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
panic.go:523: *** TestForceSystemdEnv FAILED at 2023-12-18 13:47:12.8419976 +0000 UTC m=+7530.522088601
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-915900 -n force-systemd-env-915900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-915900 -n force-systemd-env-915900: exit status 6 (13.2155004s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W1218 13:47:12.971083   14396 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1218 13:47:25.979287   14396 status.go:415] kubeconfig endpoint: extract IP: "force-systemd-env-915900" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "force-systemd-env-915900" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "force-systemd-env-915900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-915900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-915900: (1m1.5368471s)
--- FAIL: TestForceSystemdEnv (521.31s)
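
The failure above reduces to a containerd handshake timeout: the first docker.service start succeeds with a managed containerd on /var/run/docker/containerd/containerd.sock, but after minikube restarts the service, dockerd dials /run/containerd/containerd.sock and gives up after 60 seconds. A minimal triage sequence on the guest, assuming the force-systemd-env-915900 VM were still reachable (the profile is deleted during cleanup above), might look like:

	# Hedged sketch; none of these commands appear in the captured run.
	out/minikube-windows-amd64.exe -p force-systemd-env-915900 ssh "sudo systemctl status containerd --no-pager"
	out/minikube-windows-amd64.exe -p force-systemd-env-915900 ssh "ls -l /run/containerd/containerd.sock"
	out/minikube-windows-amd64.exe -p force-systemd-env-915900 ssh "sudo journalctl -u containerd --no-pager | tail -n 50"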

TestErrorSpam/setup (190.09s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-356100 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 --driver=hyperv
E1218 11:54:02.400710   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 11:54:02.416112   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 11:54:02.433487   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 11:54:02.463303   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 11:54:02.509909   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 11:54:02.603934   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 11:54:02.775183   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 11:54:03.104549   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 11:54:03.754243   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 11:54:05.045094   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 11:54:07.608049   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 11:54:12.728626   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 11:54:22.972673   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 11:54:43.455491   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 11:55:24.418958   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 11:56:46.354132   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-356100 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 --driver=hyperv: (3m10.0883759s)
error_spam_test.go:96: unexpected stderr: "W1218 11:53:36.973212    4000 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-356100] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
- KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
- MINIKUBE_LOCATION=17824
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting control plane node nospam-356100 in cluster nospam-356100
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-356100" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W1218 11:53:36.973212    4000 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (190.09s)
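
Every unexpected-stderr failure in this run carries the same warning: the embedded Docker CLI resolves currentContext "default" to a metadata file under C:\Users\jenkins.minikube7\.docker\contexts\meta\ that no longer exists. A plausible host-side cleanup, assuming the stale pointer is the "currentContext" entry in %USERPROFILE%\.docker\config.json (the log does not confirm this), would be:

	# Hedged workaround on the Jenkins host, not part of the captured run.
	docker context ls            # see which contexts the CLI can still resolve
	docker context use default   # repoint config.json at the built-in default context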

TestFunctional/parallel/ConfigCmd (1.89s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-806500 config unset cpus" to be -""- but got *"W1218 12:08:42.057730   14320 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-806500 config get cpus: exit status 14 (283.7013ms)

** stderr ** 
	W1218 12:08:42.447362    1164 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-806500 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W1218 12:08:42.447362    1164 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-806500 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W1218 12:08:42.717358   14816 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-806500 config get cpus" to be -""- but got *"W1218 12:08:43.033782    1828 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-806500 config unset cpus" to be -""- but got *"W1218 12:08:43.345430    2548 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-806500 config get cpus: exit status 14 (284.5335ms)

** stderr ** 
	W1218 12:08:43.659726    6248 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-806500 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W1218 12:08:43.659726    6248 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.89s)
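
These assertions compare the whole stderr stream verbatim, so the context warning fails every subcase even though the substantive output (exit status 14 and the config error text) matches expectations. The intended comparison can be reproduced by hand with the known-benign warning filtered out first; a sketch, not the test's actual logic:

	# Hedged manual check; the grep filter is illustrative only.
	out/minikube-windows-amd64.exe -p functional-806500 config get cpus 2>&1 | grep -v "Unable to resolve the current Docker CLI context"
	# Expected remainder: Error: specified key could not be found in config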

TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-806500 service --namespace=default --https --url hello-node: exit status 1 (15.0264292s)

** stderr ** 
	W1218 12:09:30.619826    8188 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1510: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-806500 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)

TestFunctional/parallel/ServiceCmd/Format (15.06s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-806500 service hello-node --url --format={{.IP}}: exit status 1 (15.0568084s)

** stderr ** 
	W1218 12:09:45.661989    4812 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1541: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-806500 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1547: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.06s)

TestFunctional/parallel/ServiceCmd/URL (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-806500 service hello-node --url: exit status 1 (15.0271791s)

** stderr ** 
	W1218 12:10:00.712945    2448 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1560: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-806500 service hello-node --url": exit status 1
functional_test.go:1564: found endpoint for hello-node: 
functional_test.go:1572: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.03s)
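
All three ServiceCmd failures share one shape: minikube service exits 1 after ~15s without printing a URL, so the scheme and IP assertions see empty strings. The URL can be reconstructed by hand from the cluster objects; a manual equivalent, assuming hello-node is a NodePort service in the default namespace (the log does not show the service spec):

	# Hedged manual lookup, not taken from the captured run.
	kubectl --context functional-806500 get svc hello-node -o jsonpath="{.spec.ports[0].nodePort}"
	out/minikube-windows-amd64.exe -p functional-806500 ip
	# URL = http://<minikube ip>:<nodePort>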

TestMountStart/serial/StartWithMountSecond (216.34s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-926400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E1218 12:43:42.367214   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 12:43:45.589716   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 12:43:56.554649   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 12:44:02.411479   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p mount-start-2-926400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: exit status 90 (3m24.4166572s)

-- stdout --
	* [mount-start-2-926400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17824
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting minikube without Kubernetes in cluster mount-start-2-926400
	* Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	W1218 12:41:48.814304   10060 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2023-12-18 12:42:51 UTC, ends at Mon 2023-12-18 12:45:13 UTC. --
	Dec 18 12:43:41 mount-start-2-926400 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 12:43:41 mount-start-2-926400 dockerd[692]: time="2023-12-18T12:43:41.472084981Z" level=info msg="Starting up"
	Dec 18 12:43:41 mount-start-2-926400 dockerd[692]: time="2023-12-18T12:43:41.472955947Z" level=info msg="containerd not running, starting managed containerd"
	Dec 18 12:43:41 mount-start-2-926400 dockerd[692]: time="2023-12-18T12:43:41.474123035Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=698
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.513672419Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.540229923Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.540283727Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.542561299Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.542738713Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.543077738Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.543188147Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.543293555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.543500170Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.543676783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.543811994Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.547186148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.547317858Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.547339460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.547551276Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.547729489Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.547804795Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.547843998Z" level=info msg="metadata content store policy set" policy=shared
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.560152727Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.560205931Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.560227532Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.560316939Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.560341941Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.560355242Z" level=info msg="NRI interface is disabled by configuration."
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.560370543Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.560485352Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.560592160Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.560614162Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.560629763Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.560646464Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.560664965Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.560681067Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.560695668Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.560711169Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.560726270Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.560750172Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.560765773Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.560863780Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.561362318Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.561479827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.561502129Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.561528231Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.561585835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.561690443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.561713845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.561728246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.561743747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.561758948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.561774049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.561788750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.561805151Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.561873057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.561921560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.561939562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.561954363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.562027568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.562045470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.562059671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.562073872Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.562112175Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.562125676Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.562154578Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.562528206Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.562607012Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.562650315Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 18 12:43:41 mount-start-2-926400 dockerd[698]: time="2023-12-18T12:43:41.562689718Z" level=info msg="containerd successfully booted in 0.052312s"
	Dec 18 12:43:41 mount-start-2-926400 dockerd[692]: time="2023-12-18T12:43:41.599788118Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 18 12:43:41 mount-start-2-926400 dockerd[692]: time="2023-12-18T12:43:41.613959287Z" level=info msg="Loading containers: start."
	Dec 18 12:43:41 mount-start-2-926400 dockerd[692]: time="2023-12-18T12:43:41.825627860Z" level=info msg="Loading containers: done."
	Dec 18 12:43:41 mount-start-2-926400 dockerd[692]: time="2023-12-18T12:43:41.845894489Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 18 12:43:41 mount-start-2-926400 dockerd[692]: time="2023-12-18T12:43:41.845940993Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 18 12:43:41 mount-start-2-926400 dockerd[692]: time="2023-12-18T12:43:41.845952694Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 18 12:43:41 mount-start-2-926400 dockerd[692]: time="2023-12-18T12:43:41.845959994Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 18 12:43:41 mount-start-2-926400 dockerd[692]: time="2023-12-18T12:43:41.846101705Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 18 12:43:41 mount-start-2-926400 dockerd[692]: time="2023-12-18T12:43:41.846223114Z" level=info msg="Daemon has completed initialization"
	Dec 18 12:43:41 mount-start-2-926400 dockerd[692]: time="2023-12-18T12:43:41.901268068Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 18 12:43:41 mount-start-2-926400 dockerd[692]: time="2023-12-18T12:43:41.901385977Z" level=info msg="API listen on [::]:2376"
	Dec 18 12:43:41 mount-start-2-926400 systemd[1]: Started Docker Application Container Engine.
	Dec 18 12:44:11 mount-start-2-926400 dockerd[692]: time="2023-12-18T12:44:11.949782444Z" level=info msg="Processing signal 'terminated'"
	Dec 18 12:44:11 mount-start-2-926400 dockerd[692]: time="2023-12-18T12:44:11.951275544Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 18 12:44:11 mount-start-2-926400 dockerd[692]: time="2023-12-18T12:44:11.951659244Z" level=info msg="Daemon shutdown complete"
	Dec 18 12:44:11 mount-start-2-926400 dockerd[692]: time="2023-12-18T12:44:11.951752744Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 18 12:44:11 mount-start-2-926400 dockerd[692]: time="2023-12-18T12:44:11.951779944Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 18 12:44:11 mount-start-2-926400 systemd[1]: Stopping Docker Application Container Engine...
	Dec 18 12:44:12 mount-start-2-926400 systemd[1]: docker.service: Succeeded.
	Dec 18 12:44:12 mount-start-2-926400 systemd[1]: Stopped Docker Application Container Engine.
	Dec 18 12:44:12 mount-start-2-926400 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 12:44:13 mount-start-2-926400 dockerd[1029]: time="2023-12-18T12:44:13.021455544Z" level=info msg="Starting up"
	Dec 18 12:45:13 mount-start-2-926400 dockerd[1029]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 18 12:45:13 mount-start-2-926400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 12:45:13 mount-start-2-926400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 18 12:45:13 mount-start-2-926400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p mount-start-2-926400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv" : exit status 90
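The failing step is visible in the dockerd lines above: after the restart at 12:44:12, dockerd logs "Starting up" and exits a minute later because its dial to /run/containerd/containerd.sock never succeeds, so systemd records docker.service as failed and minikube surfaces exit status 90. A minimal Go sketch of that failure mode, assuming only the socket path and the one-minute window shown in the log (this is not dockerd's or minikube's actual code):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// dockerd logged "Starting up" at 12:44:13 and gave up at 12:45:13,
		// i.e. roughly a one-minute budget for reaching containerd.
		ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
		defer cancel()

		var d net.Dialer
		for {
			conn, err := d.DialContext(ctx, "unix", "/run/containerd/containerd.sock")
			if err == nil {
				conn.Close()
				fmt.Println("containerd socket is up")
				return
			}
			select {
			case <-ctx.Done():
				// With containerd never listening, this surfaces as the
				// "context deadline exceeded" error seen in the log.
				fmt.Println("failed to dial:", ctx.Err())
				return
			case <-time.After(time.Second): // retry until the deadline
			}
		}
	}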
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-926400 -n mount-start-2-926400
E1218 12:45:19.742630   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-926400 -n mount-start-2-926400: exit status 6 (11.9243683s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W1218 12:45:13.258111   10384 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1218 12:45:24.971364   10384 status.go:415] kubeconfig endpoint: extract IP: "mount-start-2-926400" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-926400" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/StartWithMountSecond (216.34s)

TestMultiNode/serial/FreshStart2Nodes (217.03s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-015900 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E1218 12:48:42.363101   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 12:48:56.564336   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 12:49:02.420679   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
multinode_test.go:86: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-015900 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: exit status 90 (3m24.7073928s)

-- stdout --
	* [multinode-015900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17824
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node multinode-015900 in cluster multinode-015900
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	W1218 12:46:52.963325    3376 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1218 12:46:53.031440    3376 out.go:296] Setting OutFile to fd 928 ...
	I1218 12:46:53.031440    3376 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 12:46:53.031440    3376 out.go:309] Setting ErrFile to fd 1000...
	I1218 12:46:53.031440    3376 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 12:46:53.059273    3376 out.go:303] Setting JSON to false
	I1218 12:46:53.062095    3376 start.go:128] hostinfo: {"hostname":"minikube7","uptime":4087,"bootTime":1702899525,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1218 12:46:53.063131    3376 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1218 12:46:53.064470    3376 out.go:177] * [multinode-015900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1218 12:46:53.065202    3376 notify.go:220] Checking for updates...
	I1218 12:46:53.065824    3376 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 12:46:53.066588    3376 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 12:46:53.067291    3376 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1218 12:46:53.067965    3376 out.go:177]   - MINIKUBE_LOCATION=17824
	I1218 12:46:53.068532    3376 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 12:46:53.070110    3376 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 12:46:58.321655    3376 out.go:177] * Using the hyperv driver based on user configuration
	I1218 12:46:58.322810    3376 start.go:298] selected driver: hyperv
	I1218 12:46:58.322810    3376 start.go:902] validating driver "hyperv" against <nil>
	I1218 12:46:58.322810    3376 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 12:46:58.374567    3376 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1218 12:46:58.375287    3376 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 12:46:58.375870    3376 cni.go:84] Creating CNI manager for ""
	I1218 12:46:58.375870    3376 cni.go:136] 0 nodes found, recommending kindnet
	I1218 12:46:58.375870    3376 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1218 12:46:58.376078    3376 start_flags.go:323] config:
	{Name:multinode-015900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-015900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 12:46:58.376203    3376 iso.go:125] acquiring lock: {Name:mka7fdf4332632f41b99a369cc525e40499a0030 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 12:46:58.376993    3376 out.go:177] * Starting control plane node multinode-015900 in cluster multinode-015900
	I1218 12:46:58.378009    3376 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 12:46:58.378009    3376 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1218 12:46:58.378638    3376 cache.go:56] Caching tarball of preloaded images
	I1218 12:46:58.378973    3376 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1218 12:46:58.379250    3376 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1218 12:46:58.379398    3376 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\config.json ...
	I1218 12:46:58.379398    3376 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\config.json: {Name:mk7ea6d968a3cd86b7c8084e992d65167639a0fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 12:46:58.380063    3376 start.go:365] acquiring machines lock for multinode-015900: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1218 12:46:58.381069    3376 start.go:369] acquired machines lock for "multinode-015900" in 1.0066ms
	I1218 12:46:58.381069    3376 start.go:93] Provisioning new machine with config: &{Name:multinode-015900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-015900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1218 12:46:58.381069    3376 start.go:125] createHost starting for "" (driver="hyperv")
	I1218 12:46:58.381854    3376 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1218 12:46:58.382389    3376 start.go:159] libmachine.API.Create for "multinode-015900" (driver="hyperv")
	I1218 12:46:58.382485    3376 client.go:168] LocalClient.Create starting
	I1218 12:46:58.382722    3376 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1218 12:46:58.382722    3376 main.go:141] libmachine: Decoding PEM data...
	I1218 12:46:58.382722    3376 main.go:141] libmachine: Parsing certificate...
	I1218 12:46:58.383335    3376 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1218 12:46:58.383532    3376 main.go:141] libmachine: Decoding PEM data...
	I1218 12:46:58.383532    3376 main.go:141] libmachine: Parsing certificate...
	I1218 12:46:58.383702    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1218 12:47:00.452452    3376 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1218 12:47:00.452671    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:47:00.452797    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1218 12:47:02.177696    3376 main.go:141] libmachine: [stdout =====>] : False
	
	I1218 12:47:02.177696    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:47:02.177866    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1218 12:47:03.652575    3376 main.go:141] libmachine: [stdout =====>] : True
	
	I1218 12:47:03.652575    3376 main.go:141] libmachine: [stderr =====>] : 
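Each "[executing ==>]" / "[stdout =====>]" pair above is libmachine shelling out to powershell.exe with -NoProfile -NonInteractive and capturing both streams; the three probes just run are the Hyper-V module check, the Hyper-V Administrators role check (False), and the built-in Administrator role check (True). A hedged sketch of that pattern (runPS is a hypothetical helper, not libmachine's real API; the probe string is copied from the log):

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	// runPS mirrors the "[executing ==>]" pattern: one PowerShell process
	// per query, stdout and stderr captured separately.
	func runPS(command string) (string, string, error) {
		cmd := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive", command)
		var stdout, stderr bytes.Buffer
		cmd.Stdout, cmd.Stderr = &stdout, &stderr
		err := cmd.Run()
		return stdout.String(), stderr.String(), err
	}

	func main() {
		out, _, err := runPS("@([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] \"Administrator\")")
		if err != nil {
			fmt.Println("powershell failed:", err)
			return
		}
		fmt.Print("is admin: ", out) // "True" or "False", as in the log
	}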
	I1218 12:47:03.652781    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1218 12:47:07.166592    3376 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1218 12:47:07.166592    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:47:07.169665    3376 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702490427-17765-amd64.iso...
	I1218 12:47:07.634728    3376 main.go:141] libmachine: Creating SSH key...
	I1218 12:47:07.959894    3376 main.go:141] libmachine: Creating VM...
	I1218 12:47:07.959894    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1218 12:47:10.828844    3376 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1218 12:47:10.828917    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:47:10.828976    3376 main.go:141] libmachine: Using switch "Default Switch"
	I1218 12:47:10.828976    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1218 12:47:12.611027    3376 main.go:141] libmachine: [stdout =====>] : True
	
	I1218 12:47:12.611027    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:47:12.611284    3376 main.go:141] libmachine: Creating VHD
	I1218 12:47:12.611402    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\fixed.vhd' -SizeBytes 10MB -Fixed
	I1218 12:47:16.330009    3376 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 57D26B5B-9F47-4FBD-BA50-00D71E24A81D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1218 12:47:16.330233    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:47:16.330233    3376 main.go:141] libmachine: Writing magic tar header
	I1218 12:47:16.330362    3376 main.go:141] libmachine: Writing SSH key tar header
	I1218 12:47:16.339920    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\disk.vhd' -VHDType Dynamic -DeleteSource
	I1218 12:47:19.463895    3376 main.go:141] libmachine: [stdout =====>] : 
	I1218 12:47:19.464104    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:47:19.464104    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\disk.vhd' -SizeBytes 20000MB
	I1218 12:47:21.983969    3376 main.go:141] libmachine: [stdout =====>] : 
	I1218 12:47:21.983969    3376 main.go:141] libmachine: [stderr =====>] : 
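The disk preparation just completed follows a fixed sequence: New-VHD creates a small fixed-size image, libmachine writes a tar header and the SSH key straight into its data area ("Writing magic tar header" / "Writing SSH key tar header"), Convert-VHD turns it into a dynamic disk, and Resize-VHD grows it to 20000MB; on first boot the boot2docker guest finds the tar stream at the start of the block device and unpacks the key. A sketch of the tar-writing step under the assumption that it follows the plain docker-machine convention (the file names and key path here are illustrative):

	package main

	import (
		"archive/tar"
		"log"
		"os"
	)

	func main() {
		// A fixed VHD's data area starts at byte 0 of the file, so a tar
		// stream written here is what the guest sees at the start of the disk.
		f, err := os.OpenFile("fixed.vhd", os.O_WRONLY, 0)
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		key, err := os.ReadFile("id_rsa") // hypothetical SSH key path
		if err != nil {
			log.Fatal(err)
		}

		tw := tar.NewWriter(f)
		hdr := &tar.Header{Name: ".ssh/id_rsa", Mode: 0600, Size: int64(len(key)), Typeflag: tar.TypeReg}
		if err := tw.WriteHeader(hdr); err != nil {
			log.Fatal(err)
		}
		if _, err := tw.Write(key); err != nil {
			log.Fatal(err)
		}
		if err := tw.Close(); err != nil { // flushes padding and the end-of-archive marker
			log.Fatal(err)
		}
	}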
	I1218 12:47:21.983969    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-015900 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1218 12:47:25.517348    3376 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-015900 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1218 12:47:25.517509    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:47:25.517567    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-015900 -DynamicMemoryEnabled $false
	I1218 12:47:27.686774    3376 main.go:141] libmachine: [stdout =====>] : 
	I1218 12:47:27.686774    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:47:27.686937    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-015900 -Count 2
	I1218 12:47:29.846687    3376 main.go:141] libmachine: [stdout =====>] : 
	I1218 12:47:29.846687    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:47:29.846687    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-015900 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\boot2docker.iso'
	I1218 12:47:32.376179    3376 main.go:141] libmachine: [stdout =====>] : 
	I1218 12:47:32.376179    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:47:32.376179    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-015900 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\disk.vhd'
	I1218 12:47:34.968575    3376 main.go:141] libmachine: [stdout =====>] : 
	I1218 12:47:34.968848    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:47:34.968848    3376 main.go:141] libmachine: Starting VM...
	I1218 12:47:34.968848    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-015900
	I1218 12:47:37.780940    3376 main.go:141] libmachine: [stdout =====>] : 
	I1218 12:47:37.781158    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:47:37.781158    3376 main.go:141] libmachine: Waiting for host to start...
	I1218 12:47:37.781230    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:47:40.042268    3376 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:47:40.042741    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:47:40.042810    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:47:42.535619    3376 main.go:141] libmachine: [stdout =====>] : 
	I1218 12:47:42.535619    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:47:43.537489    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:47:45.752510    3376 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:47:45.752510    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:47:45.752678    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:47:48.260350    3376 main.go:141] libmachine: [stdout =====>] : 
	I1218 12:47:48.260350    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:47:49.264511    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:47:51.393401    3376 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:47:51.393436    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:47:51.393519    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:47:53.887450    3376 main.go:141] libmachine: [stdout =====>] : 
	I1218 12:47:53.887506    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:47:54.893317    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:47:57.077032    3376 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:47:57.077207    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:47:57.077334    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:47:59.593607    3376 main.go:141] libmachine: [stdout =====>] : 
	I1218 12:47:59.593607    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:00.602113    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:48:02.779810    3376 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:48:02.779810    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:02.780133    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:48:05.370644    3376 main.go:141] libmachine: [stdout =====>] : 192.168.235.154
	
	I1218 12:48:05.370644    3376 main.go:141] libmachine: [stderr =====>] : 
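The alternating state/IP queries from 12:47:37 to 12:48:05 are a poll loop: Get-VM reports Running almost immediately, but the adapter returns no address until the guest's integration services start, so libmachine sleeps and retries until 192.168.235.154 appears. A self-contained sketch of the same loop (waitForIP and its timeout are illustrative, not minikube's implementation):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// getVMIP stands in for the PowerShell query
	// "(( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]".
	func getVMIP(name string) (string, error) {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
			fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name)).Output()
		return strings.TrimSpace(string(out)), err
	}

	func waitForIP(name string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if ip, err := getVMIP(name); err == nil && ip != "" {
				return ip, nil // e.g. 192.168.235.154 above
			}
			time.Sleep(time.Second) // the log shows one retry per cycle
		}
		return "", errors.New("timed out waiting for VM IP")
	}

	func main() {
		ip, err := waitForIP("multinode-015900", 5*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("VM IP:", ip)
	}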
	I1218 12:48:05.370644    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:48:07.507846    3376 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:48:07.508014    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:07.508014    3376 machine.go:88] provisioning docker machine ...
	I1218 12:48:07.508014    3376 buildroot.go:166] provisioning hostname "multinode-015900"
	I1218 12:48:07.508014    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:48:09.626180    3376 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:48:09.626590    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:09.626590    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:48:12.139989    3376 main.go:141] libmachine: [stdout =====>] : 192.168.235.154
	
	I1218 12:48:12.139989    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:12.145156    3376 main.go:141] libmachine: Using SSH client type: native
	I1218 12:48:12.145820    3376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.235.154 22 <nil> <nil>}
	I1218 12:48:12.145820    3376 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-015900 && echo "multinode-015900" | sudo tee /etc/hostname
	I1218 12:48:12.308176    3376 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-015900
	
	I1218 12:48:12.308176    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:48:14.377022    3376 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:48:14.377022    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:14.377022    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:48:16.940545    3376 main.go:141] libmachine: [stdout =====>] : 192.168.235.154
	
	I1218 12:48:16.940545    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:16.946071    3376 main.go:141] libmachine: Using SSH client type: native
	I1218 12:48:16.946878    3376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.235.154 22 <nil> <nil>}
	I1218 12:48:16.946878    3376 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-015900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-015900/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-015900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 12:48:17.099596    3376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1218 12:48:17.099596    3376 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1218 12:48:17.099596    3376 buildroot.go:174] setting up certificates
	I1218 12:48:17.099596    3376 provision.go:83] configureAuth start
	I1218 12:48:17.099596    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:48:19.229366    3376 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:48:19.229714    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:19.229776    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:48:21.741942    3376 main.go:141] libmachine: [stdout =====>] : 192.168.235.154
	
	I1218 12:48:21.741942    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:21.742060    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:48:23.826008    3376 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:48:23.826008    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:23.826106    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:48:26.271166    3376 main.go:141] libmachine: [stdout =====>] : 192.168.235.154
	
	I1218 12:48:26.271166    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:26.271166    3376 provision.go:138] copyHostCerts
	I1218 12:48:26.271166    3376 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I1218 12:48:26.271792    3376 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1218 12:48:26.271792    3376 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1218 12:48:26.271792    3376 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1218 12:48:26.275483    3376 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I1218 12:48:26.276270    3376 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1218 12:48:26.276393    3376 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1218 12:48:26.276508    3376 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1218 12:48:26.277854    3376 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I1218 12:48:26.277943    3376 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1218 12:48:26.277943    3376 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1218 12:48:26.278484    3376 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1218 12:48:26.279354    3376 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-015900 san=[192.168.235.154 192.168.235.154 localhost 127.0.0.1 minikube multinode-015900]
	I1218 12:48:26.423700    3376 provision.go:172] copyRemoteCerts
	I1218 12:48:26.437300    3376 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 12:48:26.437300    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:48:28.531694    3376 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:48:28.531694    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:28.531880    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:48:31.043887    3376 main.go:141] libmachine: [stdout =====>] : 192.168.235.154
	
	I1218 12:48:31.043887    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:31.044475    3376 sshutil.go:53] new ssh client: &{IP:192.168.235.154 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa Username:docker}
	I1218 12:48:31.153705    3376 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7163123s)
	I1218 12:48:31.153766    3376 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1218 12:48:31.153766    3376 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1218 12:48:31.191296    3376 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1218 12:48:31.191296    3376 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I1218 12:48:31.229446    3376 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1218 12:48:31.229446    3376 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 12:48:31.267665    3376 provision.go:86] duration metric: configureAuth took 14.1679481s
	I1218 12:48:31.267704    3376 buildroot.go:189] setting minikube options for container-runtime
	I1218 12:48:31.268351    3376 config.go:182] Loaded profile config "multinode-015900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 12:48:31.268382    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:48:33.371598    3376 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:48:33.371893    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:33.372130    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:48:35.865313    3376 main.go:141] libmachine: [stdout =====>] : 192.168.235.154
	
	I1218 12:48:35.865313    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:35.870560    3376 main.go:141] libmachine: Using SSH client type: native
	I1218 12:48:35.871283    3376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.235.154 22 <nil> <nil>}
	I1218 12:48:35.871283    3376 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1218 12:48:36.029664    3376 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1218 12:48:36.029664    3376 buildroot.go:70] root file system type: tmpfs
	I1218 12:48:36.029664    3376 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1218 12:48:36.030199    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:48:38.104398    3376 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:48:38.104398    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:38.104398    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:48:40.618609    3376 main.go:141] libmachine: [stdout =====>] : 192.168.235.154
	
	I1218 12:48:40.618691    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:40.624168    3376 main.go:141] libmachine: Using SSH client type: native
	I1218 12:48:40.624942    3376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.235.154 22 <nil> <nil>}
	I1218 12:48:40.624942    3376 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1218 12:48:40.787052    3376 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
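As the comments embedded in the unit explain, the empty ExecStart= line clears any start command inherited from an earlier definition so that systemd sees exactly one ExecStart. Here minikube writes a complete replacement unit (installed by the diff/mv step below), but the same clearing mechanism is what a drop-in override relies on; a minimal illustration (hypothetical path and flags, not what this log installs):

	# /etc/systemd/system/docker.service.d/override.conf
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock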
	I1218 12:48:40.787052    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:48:42.888332    3376 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:48:42.888543    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:42.888645    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:48:45.381667    3376 main.go:141] libmachine: [stdout =====>] : 192.168.235.154
	
	I1218 12:48:45.381894    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:45.387326    3376 main.go:141] libmachine: Using SSH client type: native
	I1218 12:48:45.387961    3376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.235.154 22 <nil> <nil>}
	I1218 12:48:45.387961    3376 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1218 12:48:46.328168    3376 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1218 12:48:46.328168    3376 machine.go:91] provisioned docker machine in 38.8200259s
	I1218 12:48:46.328168    3376 client.go:171] LocalClient.Create took 1m47.9453029s
	I1218 12:48:46.328168    3376 start.go:167] duration metric: libmachine.API.Create for "multinode-015900" took 1m47.9454297s
	I1218 12:48:46.328168    3376 start.go:300] post-start starting for "multinode-015900" (driver="hyperv")
	I1218 12:48:46.328168    3376 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 12:48:46.340201    3376 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 12:48:46.340201    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:48:48.447319    3376 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:48:48.447319    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:48.447443    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:48:50.947456    3376 main.go:141] libmachine: [stdout =====>] : 192.168.235.154
	
	I1218 12:48:50.947456    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:50.947944    3376 sshutil.go:53] new ssh client: &{IP:192.168.235.154 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa Username:docker}
	I1218 12:48:51.057806    3376 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7175893s)
	I1218 12:48:51.072097    3376 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 12:48:51.078382    3376 command_runner.go:130] > NAME=Buildroot
	I1218 12:48:51.078382    3376 command_runner.go:130] > VERSION=2021.02.12-1-g0492d51-dirty
	I1218 12:48:51.078382    3376 command_runner.go:130] > ID=buildroot
	I1218 12:48:51.078382    3376 command_runner.go:130] > VERSION_ID=2021.02.12
	I1218 12:48:51.078382    3376 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1218 12:48:51.078453    3376 info.go:137] Remote host: Buildroot 2021.02.12
	I1218 12:48:51.078568    3376 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1218 12:48:51.079056    3376 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1218 12:48:51.080174    3376 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem -> 149282.pem in /etc/ssl/certs
	I1218 12:48:51.080246    3376 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem -> /etc/ssl/certs/149282.pem
	I1218 12:48:51.092345    3376 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 12:48:51.106518    3376 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem --> /etc/ssl/certs/149282.pem (1708 bytes)
	I1218 12:48:51.143284    3376 start.go:303] post-start completed in 4.8150994s
	I1218 12:48:51.147272    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:48:53.239087    3376 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:48:53.239087    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:53.239218    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:48:55.735456    3376 main.go:141] libmachine: [stdout =====>] : 192.168.235.154
	
	I1218 12:48:55.735644    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:55.735884    3376 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\config.json ...
	I1218 12:48:55.738772    3376 start.go:128] duration metric: createHost completed in 1m57.3573219s
	I1218 12:48:55.738772    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:48:57.822498    3376 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:48:57.822498    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:48:57.822498    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:49:00.292047    3376 main.go:141] libmachine: [stdout =====>] : 192.168.235.154
	
	I1218 12:49:00.292047    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:49:00.297205    3376 main.go:141] libmachine: Using SSH client type: native
	I1218 12:49:00.297951    3376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.235.154 22 <nil> <nil>}
	I1218 12:49:00.297951    3376 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1218 12:49:00.438410    3376 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702903740.448248052
	
	I1218 12:49:00.438557    3376 fix.go:206] guest clock: 1702903740.448248052
	I1218 12:49:00.438557    3376 fix.go:219] Guest: 2023-12-18 12:49:00.448248052 +0000 UTC Remote: 2023-12-18 12:48:55.738772 +0000 UTC m=+122.870123101 (delta=4.709476052s)
	I1218 12:49:00.438685    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:49:02.528683    3376 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:49:02.528683    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:49:02.528814    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:49:05.058640    3376 main.go:141] libmachine: [stdout =====>] : 192.168.235.154
	
	I1218 12:49:05.058722    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:49:05.064043    3376 main.go:141] libmachine: Using SSH client type: native
	I1218 12:49:05.064654    3376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.235.154 22 <nil> <nil>}
	I1218 12:49:05.064842    3376 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702903740
	I1218 12:49:05.216063    3376 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Dec 18 12:49:00 UTC 2023
	
	I1218 12:49:05.216063    3376 fix.go:226] clock set: Mon Dec 18 12:49:00 UTC 2023
	 (err=<nil>)
	I1218 12:49:05.216063    3376 start.go:83] releasing machines lock for "multinode-015900", held for 2m6.8345822s
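The block above is minikube's guest-clock fix: it reads the VM's clock over SSH with date +%s.%N, compares it to the host wall clock (fix.go:219 reports a 4.7s delta here), and resets the guest with sudo date -s @<epoch>. The same check in bash (the 2-second tolerance below is an assumed threshold, not taken from the log):

    # Sketch: detect and correct guest clock drift, as in fix.go above.
    GUEST="docker@192.168.235.154"
    host_now=$(date +%s)
    guest_now=$(ssh "$GUEST" 'date +%s')
    delta=$(( guest_now - host_now ))
    if [ "${delta#-}" -gt 2 ]; then          # 2s tolerance is an assumption
      ssh "$GUEST" "sudo date -s @${host_now}"
    fi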
	I1218 12:49:05.216809    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:49:07.285002    3376 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:49:07.285303    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:49:07.285449    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:49:09.793253    3376 main.go:141] libmachine: [stdout =====>] : 192.168.235.154
	
	I1218 12:49:09.793253    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:49:09.798915    3376 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 12:49:09.799115    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:49:09.811726    3376 ssh_runner.go:195] Run: cat /version.json
	I1218 12:49:09.811726    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:49:11.969214    3376 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:49:11.969311    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:49:11.969311    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:49:12.000617    3376 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:49:12.000617    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:49:12.000675    3376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:49:14.644178    3376 main.go:141] libmachine: [stdout =====>] : 192.168.235.154
	
	I1218 12:49:14.644178    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:49:14.644719    3376 sshutil.go:53] new ssh client: &{IP:192.168.235.154 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa Username:docker}
	I1218 12:49:14.661344    3376 main.go:141] libmachine: [stdout =====>] : 192.168.235.154
	
	I1218 12:49:14.661573    3376 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:49:14.662018    3376 sshutil.go:53] new ssh client: &{IP:192.168.235.154 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa Username:docker}
	I1218 12:49:14.746674    3376 command_runner.go:130] > {"iso_version": "v1.32.1-1702490427-17765", "kicbase_version": "v0.0.42-1702394725-17761", "minikube_version": "v1.32.0", "commit": "2780c4af854905e5cd4b94dc93de1f9d00b9040d"}
	I1218 12:49:14.746770    3376 ssh_runner.go:235] Completed: cat /version.json: (4.9350277s)
	I1218 12:49:14.759274    3376 ssh_runner.go:195] Run: systemctl --version
	I1218 12:49:14.831702    3376 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1218 12:49:14.831797    3376 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0328642s)
	I1218 12:49:14.831913    3376 command_runner.go:130] > systemd 247 (247)
	I1218 12:49:14.832111    3376 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
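Before touching the runtime, the lines above run health probes in parallel against the new VM: an outbound curl to registry.k8s.io (which answers with a Temporary Redirect page, confirming egress), cat /version.json to verify the ISO and kicbase pins, and systemctl --version. Equivalent one-off checks:

    # Spot checks matching the probes above (guest address from this run).
    GUEST="docker@192.168.235.154"
    ssh "$GUEST" 'curl -sS -m 2 https://registry.k8s.io/'   # expect a redirect body
    ssh "$GUEST" 'cat /version.json'                        # iso_version / kicbase_version
    ssh "$GUEST" 'systemctl --version | head -n 1'          # systemd 247 here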
	I1218 12:49:14.845022    3376 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 12:49:14.852669    3376 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1218 12:49:14.852978    3376 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 12:49:14.864980    3376 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 12:49:14.886396    3376 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1218 12:49:14.886818    3376 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
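The find command above disables competing CNI configs by renaming anything matching *bridge* or *podman* in /etc/cni/net.d with a .mk_disabled suffix; here it caught 87-podman-bridge.conflist. The log prints the find operators unquoted; run interactively the same command needs escaping, and the mv is safer via a positional parameter:

    # Same disable step, quoted for an interactive shell.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;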
	I1218 12:49:14.886818    3376 start.go:475] detecting cgroup driver to use...
	I1218 12:49:14.887122    3376 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 12:49:14.915675    3376 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1218 12:49:14.928032    3376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1218 12:49:14.955547    3376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 12:49:14.969678    3376 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 12:49:14.982087    3376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 12:49:15.014579    3376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 12:49:15.043724    3376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 12:49:15.073901    3376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 12:49:15.103146    3376 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 12:49:15.135230    3376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 12:49:15.163358    3376 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 12:49:15.177541    3376 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1218 12:49:15.190913    3376 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 12:49:15.218420    3376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 12:49:15.391948    3376 ssh_runner.go:195] Run: sudo systemctl restart containerd
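The sequence from 12:49:14.887 to 12:49:15.392 rewires containerd for the cgroupfs driver: crictl is pointed at containerd's socket, config.toml gets the pause:3.9 sandbox image, SystemdCgroup = false, the legacy runtime.v1.linux and runc.v1 shims mapped to io.containerd.runc.v2, and conf_dir set to /etc/cni/net.d; the bridge-nf-call-iptables and ip_forward kernel switches are confirmed before the restart. Condensed into one runnable script, same edits in the same order:

    # Condensed form of the config.toml edits and kernel checks logged above.
    CFG=/etc/containerd/config.toml
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$CFG"
    sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' "$CFG"
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$CFG"
    sudo sed -i '/systemd_cgroup/d' "$CFG"
    sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' "$CFG"
    sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$CFG"
    sudo sysctl net.bridge.bridge-nf-call-iptables           # expect "= 1"
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart containerd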
	I1218 12:49:15.419424    3376 start.go:475] detecting cgroup driver to use...
	I1218 12:49:15.435585    3376 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1218 12:49:15.459745    3376 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1218 12:49:15.459816    3376 command_runner.go:130] > [Unit]
	I1218 12:49:15.459899    3376 command_runner.go:130] > Description=Docker Application Container Engine
	I1218 12:49:15.459899    3376 command_runner.go:130] > Documentation=https://docs.docker.com
	I1218 12:49:15.459973    3376 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1218 12:49:15.459973    3376 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1218 12:49:15.460066    3376 command_runner.go:130] > StartLimitBurst=3
	I1218 12:49:15.460123    3376 command_runner.go:130] > StartLimitIntervalSec=60
	I1218 12:49:15.460167    3376 command_runner.go:130] > [Service]
	I1218 12:49:15.460167    3376 command_runner.go:130] > Type=notify
	I1218 12:49:15.460248    3376 command_runner.go:130] > Restart=on-failure
	I1218 12:49:15.460248    3376 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1218 12:49:15.460309    3376 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1218 12:49:15.460361    3376 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1218 12:49:15.460361    3376 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1218 12:49:15.460420    3376 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1218 12:49:15.460522    3376 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1218 12:49:15.460580    3376 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1218 12:49:15.460633    3376 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1218 12:49:15.460633    3376 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1218 12:49:15.460692    3376 command_runner.go:130] > ExecStart=
	I1218 12:49:15.460745    3376 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1218 12:49:15.460821    3376 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1218 12:49:15.460891    3376 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1218 12:49:15.460891    3376 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1218 12:49:15.460947    3376 command_runner.go:130] > LimitNOFILE=infinity
	I1218 12:49:15.460947    3376 command_runner.go:130] > LimitNPROC=infinity
	I1218 12:49:15.461017    3376 command_runner.go:130] > LimitCORE=infinity
	I1218 12:49:15.461075    3376 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1218 12:49:15.461127    3376 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1218 12:49:15.461127    3376 command_runner.go:130] > TasksMax=infinity
	I1218 12:49:15.461127    3376 command_runner.go:130] > TimeoutStartSec=0
	I1218 12:49:15.461187    3376 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1218 12:49:15.461187    3376 command_runner.go:130] > Delegate=yes
	I1218 12:49:15.461240    3376 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1218 12:49:15.461297    3376 command_runner.go:130] > KillMode=process
	I1218 12:49:15.461297    3376 command_runner.go:130] > [Install]
	I1218 12:49:15.461364    3376 command_runner.go:130] > WantedBy=multi-user.target
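Note the bare ExecStart= immediately before the full dockerd command in the unit dump above: in a systemd override, an empty assignment clears the ExecStart inherited from the base unit, which matters because non-oneshot services may carry only one ExecStart (the unit's own comments spell this out). The pattern in miniature, with a generic override path and dockerd flags that are examples rather than minikube's:

    # Sketch: a drop-in that clears and replaces an inherited ExecStart.
    sudo mkdir -p /etc/systemd/system/docker.service.d
    printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' \
      | sudo tee /etc/systemd/system/docker.service.d/override.conf
    sudo systemctl daemon-reload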
	I1218 12:49:15.476757    3376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 12:49:15.510274    3376 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1218 12:49:15.561035    3376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 12:49:15.594232    3376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 12:49:15.628371    3376 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 12:49:15.683713    3376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 12:49:15.701881    3376 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 12:49:15.728295    3376 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1218 12:49:15.740624    3376 ssh_runner.go:195] Run: which cri-dockerd
	I1218 12:49:15.746063    3376 command_runner.go:130] > /usr/bin/cri-dockerd
	I1218 12:49:15.762374    3376 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1218 12:49:15.775485    3376 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1218 12:49:15.813578    3376 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1218 12:49:15.987989    3376 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1218 12:49:16.138927    3376 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1218 12:49:16.139292    3376 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1218 12:49:16.178723    3376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
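To hand Docker to the kubelet, the steps above point crictl at cri-dockerd's socket, add a 10-cni.conf drop-in for cri-docker.service, unmask docker.service, enable docker.socket, and push a 130-byte /etc/docker/daemon.json selecting the cgroupfs driver. The daemon.json body is not shown in the log; the JSON below is a plausible reconstruction of the cgroup setting only, not a verbatim copy:

    # Assembled from the steps above; daemon.json content is an assumption.
    printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' | sudo tee /etc/crictl.yaml
    sudo mkdir -p /etc/systemd/system/cri-docker.service.d
    sudo systemctl unmask docker.service
    sudo systemctl enable docker.socket
    printf '{"exec-opts": ["native.cgroupdriver=cgroupfs"]}\n' | sudo tee /etc/docker/daemon.json
    sudo systemctl daemon-reload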
	I1218 12:49:16.344742    3376 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1218 12:50:17.460491    3376 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I1218 12:50:17.460559    3376 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xe" for details.
	I1218 12:50:17.460629    3376 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1146679s)
	I1218 12:50:17.479426    3376 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1218 12:50:17.497426    3376 command_runner.go:130] > -- Journal begins at Mon 2023-12-18 12:47:55 UTC, ends at Mon 2023-12-18 12:50:17 UTC. --
	I1218 12:50:17.497426    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 systemd[1]: Starting Docker Application Container Engine...
	I1218 12:50:17.497541    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[672]: time="2023-12-18T12:48:45.916993167Z" level=info msg="Starting up"
	I1218 12:50:17.497541    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[672]: time="2023-12-18T12:48:45.918246369Z" level=info msg="containerd not running, starting managed containerd"
	I1218 12:50:17.497541    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[672]: time="2023-12-18T12:48:45.920314339Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	I1218 12:50:17.497604    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.952574480Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	I1218 12:50:17.497604    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.979669898Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I1218 12:50:17.497604    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.979776707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I1218 12:50:17.497706    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.982345917Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I1218 12:50:17.497706    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.982561335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I1218 12:50:17.497706    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.982898862Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1218 12:50:17.497789    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.983098279Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I1218 12:50:17.497789    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.983245391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I1218 12:50:17.497789    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.983438907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I1218 12:50:17.497789    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.983502012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I1218 12:50:17.497923    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.983618021Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I1218 12:50:17.497923    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.983908545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I1218 12:50:17.497923    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.984014354Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I1218 12:50:17.497994    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.984031555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I1218 12:50:17.498117    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.984168666Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1218 12:50:17.498168    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.984247573Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I1218 12:50:17.498215    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.984312778Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I1218 12:50:17.498215    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.984325679Z" level=info msg="metadata content store policy set" policy=shared
	I1218 12:50:17.498215    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996380466Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I1218 12:50:17.498215    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996506277Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I1218 12:50:17.498215    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996527378Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I1218 12:50:17.498215    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996559481Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I1218 12:50:17.498215    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996592984Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I1218 12:50:17.498215    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996606285Z" level=info msg="NRI interface is disabled by configuration."
	I1218 12:50:17.498215    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996621286Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I1218 12:50:17.498215    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996794900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I1218 12:50:17.498215    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996837604Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I1218 12:50:17.498215    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997012418Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I1218 12:50:17.498215    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997044221Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I1218 12:50:17.498215    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997060822Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I1218 12:50:17.498215    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997078023Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I1218 12:50:17.498215    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997092225Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I1218 12:50:17.498215    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997105326Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I1218 12:50:17.498215    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997120427Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I1218 12:50:17.498215    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997134928Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I1218 12:50:17.498215    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997148429Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I1218 12:50:17.499053    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997160630Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I1218 12:50:17.499118    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997377448Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I1218 12:50:17.499118    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.998469137Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I1218 12:50:17.499184    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.998650952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I1218 12:50:17.499184    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.998847468Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I1218 12:50:17.499294    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999099089Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I1218 12:50:17.499294    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999329808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I1218 12:50:17.499373    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999822448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I1218 12:50:17.499399    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999862851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I1218 12:50:17.499399    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999886353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I1218 12:50:17.499399    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999905455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I1218 12:50:17.499399    3376 command_runner.go:130] > Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999924256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I1218 12:50:17.499399    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999941258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I1218 12:50:17.499399    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000075269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I1218 12:50:17.499399    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000106371Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I1218 12:50:17.499399    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000371893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I1218 12:50:17.499399    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000411796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I1218 12:50:17.499399    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000431298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I1218 12:50:17.499399    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000454500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I1218 12:50:17.499399    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000473601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I1218 12:50:17.499399    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000493003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I1218 12:50:17.499399    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000512905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I1218 12:50:17.499399    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000532006Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000551808Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000567809Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000582510Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.001412877Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.001674597Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.001736502Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.001758004Z" level=info msg="containerd successfully booted in 0.051162s"
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.029885463Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.050434840Z" level=info msg="Loading containers: start."
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.259703202Z" level=info msg="Loading containers: done."
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.278873673Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.278979982Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.279096490Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.279192798Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.279364511Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.279733839Z" level=info msg="Daemon has completed initialization"
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.335346608Z" level=info msg="API listen on /var/run/docker.sock"
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.335431514Z" level=info msg="API listen on [::]:2376"
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:48:46 multinode-015900 systemd[1]: Started Docker Application Container Engine.
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:49:16 multinode-015900 dockerd[672]: time="2023-12-18T12:49:16.375352822Z" level=info msg="Processing signal 'terminated'"
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:49:16 multinode-015900 systemd[1]: Stopping Docker Application Container Engine...
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:49:16 multinode-015900 dockerd[672]: time="2023-12-18T12:49:16.377314422Z" level=info msg="Daemon shutdown complete"
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:49:16 multinode-015900 dockerd[672]: time="2023-12-18T12:49:16.377325322Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:49:16 multinode-015900 dockerd[672]: time="2023-12-18T12:49:16.377441122Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:49:16 multinode-015900 dockerd[672]: time="2023-12-18T12:49:16.377567422Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I1218 12:50:17.500251    3376 command_runner.go:130] > Dec 18 12:49:17 multinode-015900 systemd[1]: docker.service: Succeeded.
	I1218 12:50:17.500888    3376 command_runner.go:130] > Dec 18 12:49:17 multinode-015900 systemd[1]: Stopped Docker Application Container Engine.
	I1218 12:50:17.500944    3376 command_runner.go:130] > Dec 18 12:49:17 multinode-015900 systemd[1]: Starting Docker Application Container Engine...
	I1218 12:50:17.500944    3376 command_runner.go:130] > Dec 18 12:49:17 multinode-015900 dockerd[1008]: time="2023-12-18T12:49:17.460625722Z" level=info msg="Starting up"
	I1218 12:50:17.500970    3376 command_runner.go:130] > Dec 18 12:50:17 multinode-015900 dockerd[1008]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I1218 12:50:17.500970    3376 command_runner.go:130] > Dec 18 12:50:17 multinode-015900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I1218 12:50:17.501042    3376 command_runner.go:130] > Dec 18 12:50:17 multinode-015900 systemd[1]: docker.service: Failed with result 'exit-code'.
	I1218 12:50:17.501067    3376 command_runner.go:130] > Dec 18 12:50:17 multinode-015900 systemd[1]: Failed to start Docker Application Container Engine.
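The journal above narrows the failure: the first dockerd (pid 672) came up cleanly at 12:48:46, was stopped for reconfiguration at 12:49:16, and the restarted daemon (pid 1008) then spent its entire 60-second window failing to dial /run/containerd/containerd.sock before systemd failed the unit. Notably, containerd had been stopped at 12:49:15 earlier in this run, and the new daemon never logged the usual "starting managed containerd" line. Reasonable first diagnostics for this state, using standard systemd tooling:

    # First checks when docker.service dies on the containerd socket dial.
    systemctl status docker.service --no-pager
    journalctl --no-pager -u docker -n 50
    systemctl is-active containerd             # stopped earlier in this run
    ls -l /run/containerd/containerd.sock      # the dial target in the error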
	I1218 12:50:17.507983    3376 out.go:177] 
	W1218 12:50:17.508338    3376 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2023-12-18 12:47:55 UTC, ends at Mon 2023-12-18 12:50:17 UTC. --
	Dec 18 12:48:45 multinode-015900 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 12:48:45 multinode-015900 dockerd[672]: time="2023-12-18T12:48:45.916993167Z" level=info msg="Starting up"
	Dec 18 12:48:45 multinode-015900 dockerd[672]: time="2023-12-18T12:48:45.918246369Z" level=info msg="containerd not running, starting managed containerd"
	Dec 18 12:48:45 multinode-015900 dockerd[672]: time="2023-12-18T12:48:45.920314339Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.952574480Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.979669898Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.979776707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.982345917Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.982561335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.982898862Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.983098279Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.983245391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.983438907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.983502012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.983618021Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.983908545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.984014354Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.984031555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.984168666Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.984247573Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.984312778Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.984325679Z" level=info msg="metadata content store policy set" policy=shared
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996380466Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996506277Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996527378Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996559481Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996592984Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996606285Z" level=info msg="NRI interface is disabled by configuration."
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996621286Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996794900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996837604Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997012418Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997044221Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997060822Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997078023Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997092225Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997105326Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997120427Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997134928Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997148429Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997160630Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997377448Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.998469137Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.998650952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.998847468Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999099089Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999329808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999822448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999862851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999886353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999905455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999924256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999941258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000075269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000106371Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000371893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000411796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000431298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000454500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000473601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000493003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000512905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000532006Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000551808Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000567809Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000582510Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.001412877Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.001674597Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.001736502Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.001758004Z" level=info msg="containerd successfully booted in 0.051162s"
	Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.029885463Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.050434840Z" level=info msg="Loading containers: start."
	Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.259703202Z" level=info msg="Loading containers: done."
	Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.278873673Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.278979982Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.279096490Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.279192798Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.279364511Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.279733839Z" level=info msg="Daemon has completed initialization"
	Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.335346608Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.335431514Z" level=info msg="API listen on [::]:2376"
	Dec 18 12:48:46 multinode-015900 systemd[1]: Started Docker Application Container Engine.
	Dec 18 12:49:16 multinode-015900 dockerd[672]: time="2023-12-18T12:49:16.375352822Z" level=info msg="Processing signal 'terminated'"
	Dec 18 12:49:16 multinode-015900 systemd[1]: Stopping Docker Application Container Engine...
	Dec 18 12:49:16 multinode-015900 dockerd[672]: time="2023-12-18T12:49:16.377314422Z" level=info msg="Daemon shutdown complete"
	Dec 18 12:49:16 multinode-015900 dockerd[672]: time="2023-12-18T12:49:16.377325322Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 18 12:49:16 multinode-015900 dockerd[672]: time="2023-12-18T12:49:16.377441122Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 18 12:49:16 multinode-015900 dockerd[672]: time="2023-12-18T12:49:16.377567422Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 18 12:49:17 multinode-015900 systemd[1]: docker.service: Succeeded.
	Dec 18 12:49:17 multinode-015900 systemd[1]: Stopped Docker Application Container Engine.
	Dec 18 12:49:17 multinode-015900 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 12:49:17 multinode-015900 dockerd[1008]: time="2023-12-18T12:49:17.460625722Z" level=info msg="Starting up"
	Dec 18 12:50:17 multinode-015900 dockerd[1008]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 18 12:50:17 multinode-015900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 12:50:17 multinode-015900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 18 12:50:17 multinode-015900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2023-12-18 12:47:55 UTC, ends at Mon 2023-12-18 12:50:17 UTC. --
	Dec 18 12:48:45 multinode-015900 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 12:48:45 multinode-015900 dockerd[672]: time="2023-12-18T12:48:45.916993167Z" level=info msg="Starting up"
	Dec 18 12:48:45 multinode-015900 dockerd[672]: time="2023-12-18T12:48:45.918246369Z" level=info msg="containerd not running, starting managed containerd"
	Dec 18 12:48:45 multinode-015900 dockerd[672]: time="2023-12-18T12:48:45.920314339Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.952574480Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.979669898Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.979776707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.982345917Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.982561335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.982898862Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.983098279Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.983245391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.983438907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.983502012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.983618021Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.983908545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.984014354Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.984031555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.984168666Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.984247573Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.984312778Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.984325679Z" level=info msg="metadata content store policy set" policy=shared
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996380466Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996506277Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996527378Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996559481Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996592984Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996606285Z" level=info msg="NRI interface is disabled by configuration."
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996621286Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996794900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.996837604Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997012418Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997044221Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997060822Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997078023Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997092225Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997105326Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997120427Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997134928Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997148429Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997160630Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.997377448Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.998469137Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.998650952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.998847468Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999099089Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999329808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999822448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999862851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999886353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999905455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:45 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999924256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:45.999941258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000075269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000106371Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000371893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000411796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000431298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000454500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000473601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000493003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000512905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000532006Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000551808Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000567809Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.000582510Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.001412877Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.001674597Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.001736502Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 18 12:48:46 multinode-015900 dockerd[679]: time="2023-12-18T12:48:46.001758004Z" level=info msg="containerd successfully booted in 0.051162s"
	Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.029885463Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.050434840Z" level=info msg="Loading containers: start."
	Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.259703202Z" level=info msg="Loading containers: done."
	Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.278873673Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.278979982Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.279096490Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.279192798Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.279364511Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.279733839Z" level=info msg="Daemon has completed initialization"
	Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.335346608Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 18 12:48:46 multinode-015900 dockerd[672]: time="2023-12-18T12:48:46.335431514Z" level=info msg="API listen on [::]:2376"
	Dec 18 12:48:46 multinode-015900 systemd[1]: Started Docker Application Container Engine.
	Dec 18 12:49:16 multinode-015900 dockerd[672]: time="2023-12-18T12:49:16.375352822Z" level=info msg="Processing signal 'terminated'"
	Dec 18 12:49:16 multinode-015900 systemd[1]: Stopping Docker Application Container Engine...
	Dec 18 12:49:16 multinode-015900 dockerd[672]: time="2023-12-18T12:49:16.377314422Z" level=info msg="Daemon shutdown complete"
	Dec 18 12:49:16 multinode-015900 dockerd[672]: time="2023-12-18T12:49:16.377325322Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 18 12:49:16 multinode-015900 dockerd[672]: time="2023-12-18T12:49:16.377441122Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 18 12:49:16 multinode-015900 dockerd[672]: time="2023-12-18T12:49:16.377567422Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 18 12:49:17 multinode-015900 systemd[1]: docker.service: Succeeded.
	Dec 18 12:49:17 multinode-015900 systemd[1]: Stopped Docker Application Container Engine.
	Dec 18 12:49:17 multinode-015900 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 12:49:17 multinode-015900 dockerd[1008]: time="2023-12-18T12:49:17.460625722Z" level=info msg="Starting up"
	Dec 18 12:50:17 multinode-015900 dockerd[1008]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 18 12:50:17 multinode-015900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 12:50:17 multinode-015900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 18 12:50:17 multinode-015900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1218 12:50:17.508877    3376 out.go:239] * 
	* 
	W1218 12:50:17.510051    3376 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 12:50:17.510097    3376 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:88: failed to start cluster. args "out/minikube-windows-amd64.exe start -p multinode-015900 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900: exit status 6 (12.1089923s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W1218 12:50:17.938915   10880 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1218 12:50:29.815726   10880 status.go:415] kubeconfig endpoint: extract IP: "multinode-015900" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-015900" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (217.03s)
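
The journal above pins down the root cause for this block: after the restart at 12:49:17, dockerd (pid 1008) waited for a listener on /run/containerd/containerd.sock and gave up a minute later with "context deadline exceeded". The following is a minimal Go sketch of that failure mode, not dockerd's actual startup code; the socket path is taken from the journal and the 60-second budget is inferred from the timestamps (12:49:17 "Starting up" to the 12:50:17 dial failure).

	// probe_containerd.go (hypothetical file name): retry a unix socket until it
	// answers or the context deadline expires, the same class of check that
	// produced the "context deadline exceeded" error above.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Assumption: 60s budget, inferred from the journal timestamps.
		ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
		defer cancel()

		var d net.Dialer
		for {
			conn, err := d.DialContext(ctx, "unix", "/run/containerd/containerd.sock")
			if err == nil {
				conn.Close()
				fmt.Println("containerd socket is up")
				return
			}
			select {
			case <-ctx.Done():
				// No listener ever appeared, so the overall wait ends with
				// ctx.Err() == context.DeadlineExceeded, as in the journal.
				fmt.Println("failed to dial:", ctx.Err())
				return
			case <-time.After(500 * time.Millisecond):
				// Brief pause before the next dial attempt.
			}
		}
	}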

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (128.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:509: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (392.1548ms)

                                                
                                                
** stderr ** 
	W1218 12:50:30.003400   14532 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: cluster "multinode-015900" does not exist

                                                
                                                
** /stderr **
multinode_test.go:511: failed to create busybox deployment to multinode cluster
multinode_test.go:514: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- rollout status deployment/busybox: exit status 1 (390.9354ms)

                                                
                                                
** stderr ** 
	W1218 12:50:30.395733    7068 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-015900"

                                                
                                                
** /stderr **
multinode_test.go:516: failed to deploy busybox to multinode cluster
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (376.0754ms)

                                                
                                                
** stderr ** 
	W1218 12:50:30.790687   15312 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-015900"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (357.5413ms)

                                                
                                                
** stderr ** 
	W1218 12:50:32.621179    8056 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-015900"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (360.9412ms)

                                                
                                                
** stderr ** 
	W1218 12:50:34.659854    3632 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-015900"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (359.8456ms)

                                                
                                                
** stderr ** 
	W1218 12:50:37.987707    9440 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-015900"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (408.2459ms)

                                                
                                                
** stderr ** 
	W1218 12:50:40.613922   14324 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-015900"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (361.1064ms)

                                                
                                                
** stderr ** 
	W1218 12:50:47.564609    6412 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-015900"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (375.0172ms)

                                                
                                                
** stderr ** 
	W1218 12:50:57.898509   12832 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-015900"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (357.9689ms)

                                                
                                                
** stderr ** 
	W1218 12:51:07.116316   12732 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-015900"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (346.6125ms)

                                                
                                                
** stderr ** 
	W1218 12:51:20.806507    3352 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-015900"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
E1218 12:51:45.549418   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (361.6818ms)

                                                
                                                
** stderr ** 
	W1218 12:51:58.202057   13944 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-015900"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (358.249ms)

                                                
                                                
** stderr ** 
	W1218 12:52:24.639789   10448 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-015900"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:540: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:544: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:544: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (360.6069ms)

                                                
                                                
** stderr ** 
	W1218 12:52:24.994900    6648 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-015900"

                                                
                                                
** /stderr **
multinode_test.go:546: failed get Pod names
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- exec  -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- exec  -- nslookup kubernetes.io: exit status 1 (360.9619ms)

                                                
                                                
** stderr ** 
	W1218 12:52:25.360921   14988 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-015900"

                                                
                                                
** /stderr **
multinode_test.go:554: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- exec  -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- exec  -- nslookup kubernetes.default: exit status 1 (375.0916ms)

                                                
                                                
** stderr ** 
	W1218 12:52:25.721905    6592 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-015900"

                                                
                                                
** /stderr **
multinode_test.go:564: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:570: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (371.7453ms)

                                                
                                                
** stderr ** 
	W1218 12:52:26.097519    6712 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-015900"

                                                
                                                
** /stderr **
multinode_test.go:572: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900: exit status 6 (11.9726103s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W1218 12:52:26.469354    1580 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1218 12:52:38.250418    1580 status.go:415] kubeconfig endpoint: extract IP: "multinode-015900" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-015900" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (128.44s)
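
Every minikube invocation in this block also prints the warning about resolving Docker CLI context "default"; that warning is incidental noise next to the actual error ("no server found for cluster \"multinode-015900\""). The long hex directory in the warning's path is not random: the Docker CLI keys its context metadata store by the SHA-256 digest of the context name, and the digest of "default" is exactly the value seen above. A quick standalone check:

	// context_hash.go (hypothetical file name): confirm that the directory name
	// in the recurring warning is sha256("default").
	package main

	import (
		"crypto/sha256"
		"fmt"
	)

	func main() {
		sum := sha256.Sum256([]byte("default"))
		fmt.Printf("%x\n", sum)
		// Prints: 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
	}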

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (12.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:580: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-015900 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (373.257ms)

                                                
                                                
** stderr ** 
	W1218 12:52:38.443515   10308 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: no server found for cluster "multinode-015900"

                                                
                                                
** /stderr **
multinode_test.go:582: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900: exit status 6 (11.7820566s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W1218 12:52:38.810411    1668 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1218 12:52:50.411833    1668 status.go:415] kubeconfig endpoint: extract IP: "multinode-015900" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-015900" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (12.16s)
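
The status post-mortems in this and the surrounding blocks all fail at the same point: the endpoint cannot be extracted because "multinode-015900" has no cluster entry in the kubeconfig, presumably because the failed start never wrote one and kubectl was left pointing at a stale minikube-vm context. A minimal sketch of that lookup, assuming the k8s.io/client-go module is available and KUBECONFIG points at the file (minikube's status.go performs its own equivalent check, not this code):

	// kubeconfig_check.go (hypothetical file name): does the kubeconfig contain
	// a cluster entry for the profile?
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: KUBECONFIG is set; the harness uses the file under
		// minikube-integration shown in the errors above.
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		if _, ok := cfg.Clusters["multinode-015900"]; !ok {
			fmt.Println(`"multinode-015900" does not appear in kubeconfig`)
			return
		}
		fmt.Println("cluster entry found")
	}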

                                                
                                    
TestMultiNode/serial/AddNode (18.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-015900 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-015900 -v 3 --alsologtostderr: exit status 119 (7.0577883s)

                                                
                                                
-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p multinode-015900"

                                                
                                                
-- /stdout --
** stderr ** 
	W1218 12:52:50.602910   14364 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1218 12:52:50.684073   14364 out.go:296] Setting OutFile to fd 800 ...
	I1218 12:52:50.703949   14364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 12:52:50.704034   14364 out.go:309] Setting ErrFile to fd 784...
	I1218 12:52:50.704034   14364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 12:52:50.718788   14364 mustload.go:65] Loading cluster: multinode-015900
	I1218 12:52:50.719667   14364 config.go:182] Loaded profile config "multinode-015900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 12:52:50.720355   14364 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:52:52.816763   14364 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:52:52.817255   14364 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:52:52.817328   14364 host.go:66] Checking if "multinode-015900" exists ...
	I1218 12:52:52.817932   14364 api_server.go:166] Checking apiserver status ...
	I1218 12:52:52.834121   14364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 12:52:52.834275   14364 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:52:54.931836   14364 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:52:54.931836   14364 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:52:54.932017   14364 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:52:57.382980   14364 main.go:141] libmachine: [stdout =====>] : 192.168.235.154
	
	I1218 12:52:57.383041   14364 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:52:57.383494   14364 sshutil.go:53] new ssh client: &{IP:192.168.235.154 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa Username:docker}
	I1218 12:52:57.495791   14364 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.6616536s)
	W1218 12:52:57.495923   14364 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 12:52:57.497314   14364 out.go:177] * This control plane is not running! (state=Stopped)
	W1218 12:52:57.500027   14364 out.go:239] ! This is unusual - you may want to investigate using "minikube logs -p multinode-015900"
	! This is unusual - you may want to investigate using "minikube logs -p multinode-015900"
	I1218 12:52:57.501742   14364 out.go:177]   To start a cluster, run: "minikube start -p multinode-015900"

                                                
                                                
** /stderr **
multinode_test.go:113: failed to add node to current cluster. args "out/minikube-windows-amd64.exe node add -p multinode-015900 -v 3 --alsologtostderr" : exit status 119
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900: exit status 6 (11.615221s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W1218 12:52:57.655892   10104 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1218 12:53:09.080110   10104 status.go:415] kubeconfig endpoint: extract IP: "multinode-015900" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-015900" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/AddNode (18.67s)
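
The verbose trace shows how `node add` decides the control plane is down: it runs `sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH inside the VM, and pgrep's exit status 1 (no process matched) becomes "This control plane is not running!". Below is a local Go sketch of the same probe; it assumes a Linux host with pgrep, whereas the real check executes inside the guest.

	// apiserver_probe.go (hypothetical file name): treat pgrep exit status 1 as
	// "no kube-apiserver process", mirroring the check quoted in the trace.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 1 {
			// pgrep documents exit status 1 as "no processes matched".
			fmt.Println("no kube-apiserver process: control plane not running")
			return
		}
		if err != nil {
			fmt.Println("pgrep failed:", err)
			return
		}
		fmt.Printf("kube-apiserver pid: %s", out)
	}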

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (11.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-015900 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:211: (dbg) Non-zero exit: kubectl --context multinode-015900 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (127.9574ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-015900

                                                
                                                
** /stderr **
multinode_test.go:213: failed to 'kubectl get nodes' with args "kubectl --context multinode-015900 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:220: failed to decode json from label list: args "kubectl --context multinode-015900 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900: exit status 6 (11.7028205s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W1218 12:53:09.409188   10568 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1218 12:53:20.927695   10568 status.go:415] kubeconfig endpoint: extract IP: "multinode-015900" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-015900" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (11.84s)
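
Two failures stack up here: kubectl exits 1 because the context is missing, so the captured stdout is empty, and the test's follow-up JSON decode of that empty output is what produces Go's "unexpected end of JSON input". The decode error can be reproduced in isolation:

	// empty_json.go (hypothetical file name): json.Unmarshal over zero bytes
	// returns exactly the error quoted in the test output.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var labels []map[string]string
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // prints: unexpected end of JSON input
	}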

                                                
                                    
TestMultiNode/serial/ProfileList (19.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (7.3711486s)
	multinode_test.go:156: expected profile "multinode-015900" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-015900\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-015900\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"hyperv\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-015900\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.235.154\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube7:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"AutoPauseInterval\":60000000000,\"GPUs\":\"\"},\"Active\":false}]}"*. args: "out/minikube-windows-amd64.exe profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900: exit status 6 (11.7918714s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W1218 12:53:28.490190   10968 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1218 12:53:40.094007   10968 status.go:415] kubeconfig endpoint: extract IP: "multinode-015900" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-015900" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/ProfileList (19.16s)
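
The assertion at multinode_test.go:156 decodes the `profile list --output json` payload and counts the entries in Config.Nodes; the profile above carries a single control-plane node where the test expects three (apparently the two requested at start plus the one `node add` should have contributed). A trimmed sketch of that decode, covering only the fields the check needs, with field names as in the JSON above:

	// node_count.go (hypothetical file name): count nodes in a trimmed-down
	// `minikube profile list --output json` payload.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []struct {
					IP           string `json:"IP"`
					ControlPlane bool   `json:"ControlPlane"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		// A cut-down version of the payload quoted in the failure message.
		raw := `{"invalid":[],"valid":[{"Name":"multinode-015900","Config":{"Nodes":[{"IP":"192.168.235.154","ControlPlane":true}]}}]}`
		var pl profileList
		if err := json.Unmarshal([]byte(raw), &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // 1 here; the test wanted 3
		}
	}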

                                                
                                    
TestMultiNode/serial/CopyFile (23.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-015900 status --output json --alsologtostderr
E1218 12:53:42.365555   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
multinode_test.go:174: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-015900 status --output json --alsologtostderr: exit status 6 (11.7355676s)

                                                
                                                
-- stdout --
	{"Name":"multinode-015900","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Misconfigured","Worker":false}

-- /stdout --
** stderr ** 
	W1218 12:53:40.281398    2244 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1218 12:53:40.356696    2244 out.go:296] Setting OutFile to fd 884 ...
	I1218 12:53:40.357681    2244 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 12:53:40.357681    2244 out.go:309] Setting ErrFile to fd 812...
	I1218 12:53:40.357681    2244 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 12:53:40.370683    2244 out.go:303] Setting JSON to true
	I1218 12:53:40.370683    2244 mustload.go:65] Loading cluster: multinode-015900
	I1218 12:53:40.370683    2244 notify.go:220] Checking for updates...
	I1218 12:53:40.371903    2244 config.go:182] Loaded profile config "multinode-015900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 12:53:40.371903    2244 status.go:255] checking status of multinode-015900 ...
	I1218 12:53:40.372836    2244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:53:42.489538    2244 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:53:42.489538    2244 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:53:42.489636    2244 status.go:330] multinode-015900 host status = "Running" (err=<nil>)
	I1218 12:53:42.489636    2244 host.go:66] Checking if "multinode-015900" exists ...
	I1218 12:53:42.490365    2244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:53:44.610311    2244 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:53:44.610371    2244 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:53:44.610472    2244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:53:47.049445    2244 main.go:141] libmachine: [stdout =====>] : 192.168.235.154
	
	I1218 12:53:47.049445    2244 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:53:47.049445    2244 host.go:66] Checking if "multinode-015900" exists ...
	I1218 12:53:47.066055    2244 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 12:53:47.066055    2244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:53:49.153514    2244 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:53:49.153514    2244 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:53:49.153514    2244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:53:51.665452    2244 main.go:141] libmachine: [stdout =====>] : 192.168.235.154
	
	I1218 12:53:51.665452    2244 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:53:51.665984    2244 sshutil.go:53] new ssh client: &{IP:192.168.235.154 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa Username:docker}
	I1218 12:53:51.768450    2244 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7023193s)
	I1218 12:53:51.781817    2244 ssh_runner.go:195] Run: systemctl --version
	I1218 12:53:51.804845    2244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E1218 12:53:51.825471    2244 status.go:415] kubeconfig endpoint: extract IP: "multinode-015900" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 12:53:51.825549    2244 api_server.go:166] Checking apiserver status ...
	I1218 12:53:51.839206    2244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 12:53:51.856049    2244 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 12:53:51.856049    2244 status.go:421] multinode-015900 apiserver status = Stopped (err=<nil>)
	I1218 12:53:51.856049    2244 status.go:257] multinode-015900 status: &{Name:multinode-015900 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:176: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-015900 status --output json --alsologtostderr" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900
E1218 12:53:56.565374   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 12:54:02.421239   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900: exit status 6 (11.7823161s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W1218 12:53:52.026475   10932 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1218 12:54:03.621399   10932 status.go:415] kubeconfig endpoint: extract IP: "multinode-015900" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-015900" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/CopyFile (23.52s)
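The --alsologtostderr trace above also shows where the 11.7s runtime goes: each status probe is a series of PowerShell round-trips, one for the VM state and one for the first NIC's IP address, at roughly two seconds apiece. A sketch of those two queries from Go, using os/exec with the exact expressions seen in the log (the real driver invokes powershell.exe by its full path with the same -NoProfile -NonInteractive flags; this is illustrative, not the driver's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// psQuery runs one PowerShell expression the way the libmachine lines above
// do and returns its trimmed stdout.
func psQuery(expr string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := psQuery(`( Hyper-V\Get-VM multinode-015900 ).state`)
	if err != nil {
		fmt.Println("state query failed:", err)
		return
	}
	ip, _ := psQuery(`(( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]`)
	fmt.Printf("state=%s ip=%s\n", state, ip) // e.g. state=Running ip=192.168.235.154
}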

TestMultiNode/serial/StopNode (23.89s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-015900 node stop m03
multinode_test.go:238: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-015900 node stop m03: exit status 85 (325.7314ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	W1218 12:54:03.802364    5932 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_node_2c63846074b81cabd7bc8fc4aaabbe2e68888c99_0.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:240: node stop returned an error. args "out/minikube-windows-amd64.exe -p multinode-015900 node stop m03": exit status 85
multinode_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-015900 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-015900 status: exit status 6 (11.8149841s)

-- stdout --
	multinode-015900
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W1218 12:54:04.122145   11864 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1218 12:54:15.748501   11864 status.go:415] kubeconfig endpoint: extract IP: "multinode-015900" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

** /stderr **
multinode_test.go:247: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-015900 status" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900: exit status 6 (11.7433751s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W1218 12:54:15.941142    2168 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1218 12:54:27.502946    2168 status.go:415] kubeconfig endpoint: extract IP: "multinode-015900" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-015900" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/StopNode (23.89s)
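node stop m03 fails with exit status 85 before touching any VM: the profile's node list contains only the control-plane node, so GUEST_NODE_RETRIEVE fires during lookup (m03 was never added; see the failed `node add` entry in the Audit table further down). An illustrative sketch of that lookup, where the type and function names are assumptions rather than minikube's:

package main

import (
	"errors"
	"fmt"
)

// Node and ClusterConfig stand in for the profile's stored node list.
type Node struct{ Name string }

type ClusterConfig struct{ Nodes []Node }

var errNoNode = errors.New("retrieving node: Could not find node")

// retrieve returns the named node, or the not-found error that surfaces
// above as GUEST_NODE_RETRIEVE / exit status 85.
func retrieve(cc ClusterConfig, name string) (Node, error) {
	for _, n := range cc.Nodes {
		if n.Name == name {
			return n, nil
		}
	}
	return Node{}, fmt.Errorf("%w %s", errNoNode, name)
}

func main() {
	cc := ClusterConfig{Nodes: []Node{{Name: "multinode-015900"}}} // only the control plane exists
	if _, err := retrieve(cc, "m03"); err != nil {
		fmt.Println("X Exiting due to GUEST_NODE_RETRIEVE:", err)
	}
}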

TestMultiNode/serial/StartAfterStop (23.84s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-015900 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-015900 node start m03 --alsologtostderr: exit status 85 (338.7884ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	W1218 12:54:27.685557    8624 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1218 12:54:27.760489    8624 out.go:296] Setting OutFile to fd 1020 ...
	I1218 12:54:27.778085    8624 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 12:54:27.778085    8624 out.go:309] Setting ErrFile to fd 844...
	I1218 12:54:27.778195    8624 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 12:54:27.797388    8624 mustload.go:65] Loading cluster: multinode-015900
	I1218 12:54:27.798387    8624 config.go:182] Loaded profile config "multinode-015900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 12:54:27.800923    8624 out.go:177] 
	W1218 12:54:27.801671    8624 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1218 12:54:27.801671    8624 out.go:239] * 
	* 
	W1218 12:54:27.867091    8624 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_node_2a16899cf3c6bdbad0ea7439477e76ee9a435768_5.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_node_2a16899cf3c6bdbad0ea7439477e76ee9a435768_5.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 12:54:27.868087    8624 out.go:177] 

** /stderr **
multinode_test.go:284: W1218 12:54:27.685557    8624 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I1218 12:54:27.760489    8624 out.go:296] Setting OutFile to fd 1020 ...
I1218 12:54:27.778085    8624 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 12:54:27.778085    8624 out.go:309] Setting ErrFile to fd 844...
I1218 12:54:27.778195    8624 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 12:54:27.797388    8624 mustload.go:65] Loading cluster: multinode-015900
I1218 12:54:27.798387    8624 config.go:182] Loaded profile config "multinode-015900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 12:54:27.800923    8624 out.go:177] 
W1218 12:54:27.801671    8624 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1218 12:54:27.801671    8624 out.go:239] * 
* 
W1218 12:54:27.867091    8624 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                      │
│    * If the above advice does not help, please let us know:                                                          │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
│                                                                                                                      │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
│    * Please also attach the following file to the GitHub issue:                                                      │
│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_node_2a16899cf3c6bdbad0ea7439477e76ee9a435768_5.log    │
│                                                                                                                      │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                      │
│    * If the above advice does not help, please let us know:                                                          │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
│                                                                                                                      │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
│    * Please also attach the following file to the GitHub issue:                                                      │
│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_node_2a16899cf3c6bdbad0ea7439477e76ee9a435768_5.log    │
│                                                                                                                      │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1218 12:54:27.868087    8624 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-windows-amd64.exe -p multinode-015900 node start m03 --alsologtostderr": exit status 85
multinode_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-015900 status
multinode_test.go:289: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-015900 status: exit status 6 (11.8024539s)

-- stdout --
	multinode-015900
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W1218 12:54:28.037215   11456 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1218 12:54:39.642474   11456 status.go:415] kubeconfig endpoint: extract IP: "multinode-015900" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

** /stderr **
multinode_test.go:291: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-015900 status" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900: exit status 6 (11.6856983s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W1218 12:54:39.834265   13848 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1218 12:54:51.329565   13848 status.go:415] kubeconfig endpoint: extract IP: "multinode-015900" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-015900" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/StartAfterStop (23.84s)
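node start m03 dies on the same missing-node lookup as the stop above, and the follow-up status calls keep warning that kubectl points at a stale minikube-vm. The suggested `minikube update-context` effectively rewrites the profile's cluster entry in the kubeconfig so it matches the VM's current endpoint. A sketch of that idea with k8s.io/client-go, hard-coding the path and post-restart IP from this log; this is not minikube's implementation:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const path = `C:\Users\jenkins.minikube7\minikube-integration\kubeconfig`
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	cluster, ok := cfg.Clusters["multinode-015900"]
	if !ok {
		// Matches this run: the entry is absent, so update-context would
		// have to re-add it rather than just patch the server URL.
		fmt.Println(`"multinode-015900" missing from kubeconfig`)
		return
	}
	cluster.Server = "https://192.168.238.182:8443" // post-restart IP seen below
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
	fmt.Println("kubeconfig endpoint updated")
}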

TestMultiNode/serial/RestartKeepsNodes (243.23s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-015900
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-015900
multinode_test.go:318: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-015900: (35.0933894s)
multinode_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-015900 --wait=true -v=8 --alsologtostderr
multinode_test.go:323: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-015900 --wait=true -v=8 --alsologtostderr: (2m54.0622078s)
multinode_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-015900
multinode_test.go:335: reported node list is not the same after restart. Before restart: multinode-015900	192.168.235.154

After restart: multinode-015900	192.168.238.182
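Only the IP differs between the two lists: the restarted VM came back with a fresh DHCP lease (192.168.238.182, first visible in the Last Start log below) in place of 192.168.235.154, so the name/IP comparison at multinode_test.go:335 fails. A trivial sketch of the comparison's shape, with names that are assumptions rather than the test's code:

package main

import "fmt"

func main() {
	before := map[string]string{"multinode-015900": "192.168.235.154"}
	after := map[string]string{"multinode-015900": "192.168.238.182"}
	for name, ip := range before {
		if got := after[name]; got != ip {
			// Same node name, new address: the Hyper-V switch handed the
			// VM a different lease across the stop/start cycle.
			fmt.Printf("node %s: IP changed %s -> %s\n", name, ip, got)
		}
	}
}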
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900: (12.0203035s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-015900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-015900 logs -n 25: (8.3264827s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p multinode-015900 -- apply -f                   | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:50 UTC |                     |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- rollout                    | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:50 UTC |                     |
	|         | status deployment/busybox                         |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o                | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:50 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o                | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:50 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o                | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:50 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o                | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:50 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o                | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:50 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o                | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:50 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o                | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:50 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o                | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:51 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o                | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:51 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o                | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:51 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o                | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:52 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o                | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:52 UTC |                     |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- exec                       | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:52 UTC |                     |
	|         | -- nslookup kubernetes.io                         |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- exec                       | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:52 UTC |                     |
	|         | -- nslookup kubernetes.default                    |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900                               | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:52 UTC |                     |
	|         | -- exec  -- nslookup                              |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o                | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:52 UTC |                     |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| node    | add -p multinode-015900 -v 3                      | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:52 UTC |                     |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	| node    | multinode-015900 node stop m03                    | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:54 UTC |                     |
	| node    | multinode-015900 node start                       | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:54 UTC |                     |
	|         | m03 --alsologtostderr                             |                  |                   |         |                     |                     |
	| node    | list -p multinode-015900                          | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:54 UTC |                     |
	| stop    | -p multinode-015900                               | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:54 UTC | 18 Dec 23 12:55 UTC |
	| start   | -p multinode-015900                               | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:55 UTC | 18 Dec 23 12:58 UTC |
	|         | --wait=true -v=8                                  |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	| node    | list -p multinode-015900                          | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:58 UTC |                     |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 12:55:26
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 12:55:26.953249   12728 out.go:296] Setting OutFile to fd 716 ...
	I1218 12:55:26.954267   12728 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 12:55:26.954267   12728 out.go:309] Setting ErrFile to fd 776...
	I1218 12:55:26.954267   12728 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 12:55:26.974709   12728 out.go:303] Setting JSON to false
	I1218 12:55:26.977688   12728 start.go:128] hostinfo: {"hostname":"minikube7","uptime":4601,"bootTime":1702899525,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1218 12:55:26.978381   12728 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1218 12:55:26.980821   12728 out.go:177] * [multinode-015900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1218 12:55:26.981650   12728 notify.go:220] Checking for updates...
	I1218 12:55:26.982916   12728 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 12:55:26.983637   12728 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 12:55:26.984323   12728 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1218 12:55:26.984717   12728 out.go:177]   - MINIKUBE_LOCATION=17824
	I1218 12:55:26.985591   12728 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 12:55:26.987252   12728 config.go:182] Loaded profile config "multinode-015900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 12:55:26.987307   12728 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 12:55:32.304298   12728 out.go:177] * Using the hyperv driver based on existing profile
	I1218 12:55:32.305054   12728 start.go:298] selected driver: hyperv
	I1218 12:55:32.305054   12728 start.go:902] validating driver "hyperv" against &{Name:multinode-015900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-015900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.235.154 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 12:55:32.305326   12728 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 12:55:32.353025   12728 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 12:55:32.353025   12728 cni.go:84] Creating CNI manager for ""
	I1218 12:55:32.353025   12728 cni.go:136] 1 nodes found, recommending kindnet
	I1218 12:55:32.353025   12728 start_flags.go:323] config:
	{Name:multinode-015900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-015900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.235.154 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 12:55:32.353736   12728 iso.go:125] acquiring lock: {Name:mka7fdf4332632f41b99a369cc525e40499a0030 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 12:55:32.355103   12728 out.go:177] * Starting control plane node multinode-015900 in cluster multinode-015900
	I1218 12:55:32.355546   12728 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 12:55:32.355788   12728 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1218 12:55:32.355788   12728 cache.go:56] Caching tarball of preloaded images
	I1218 12:55:32.355929   12728 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1218 12:55:32.355929   12728 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1218 12:55:32.356649   12728 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\config.json ...
	I1218 12:55:32.359265   12728 start.go:365] acquiring machines lock for multinode-015900: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1218 12:55:32.359265   12728 start.go:369] acquired machines lock for "multinode-015900" in 0s
	I1218 12:55:32.359265   12728 start.go:96] Skipping create...Using existing machine configuration
	I1218 12:55:32.359265   12728 fix.go:54] fixHost starting: 
	I1218 12:55:32.360294   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:55:35.042512   12728 main.go:141] libmachine: [stdout =====>] : Off
	
	I1218 12:55:35.042512   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:55:35.042512   12728 fix.go:102] recreateIfNeeded on multinode-015900: state=Stopped err=<nil>
	W1218 12:55:35.042512   12728 fix.go:128] unexpected machine state, will restart: <nil>
	I1218 12:55:35.043707   12728 out.go:177] * Restarting existing hyperv VM for "multinode-015900" ...
	I1218 12:55:35.044295   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-015900
	I1218 12:55:37.873716   12728 main.go:141] libmachine: [stdout =====>] : 
	I1218 12:55:37.873937   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:55:37.873937   12728 main.go:141] libmachine: Waiting for host to start...
	I1218 12:55:37.873969   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:55:40.052384   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:55:40.052384   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:55:40.052384   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:55:42.526741   12728 main.go:141] libmachine: [stdout =====>] : 
	I1218 12:55:42.526741   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:55:43.528799   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:55:45.706958   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:55:45.706958   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:55:45.707087   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:55:48.164591   12728 main.go:141] libmachine: [stdout =====>] : 
	I1218 12:55:48.164623   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:55:49.169959   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:55:51.287727   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:55:51.287727   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:55:51.287823   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:55:53.792706   12728 main.go:141] libmachine: [stdout =====>] : 
	I1218 12:55:53.792706   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:55:54.798995   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:55:56.977524   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:55:56.977563   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:55:56.977595   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:55:59.438045   12728 main.go:141] libmachine: [stdout =====>] : 
	I1218 12:55:59.438045   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:00.438781   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:02.642950   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:02.643135   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:02.643370   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:05.189030   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:05.189030   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:05.192084   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:07.292971   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:07.293196   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:07.293346   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:09.752047   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:09.752047   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:09.752280   12728 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\config.json ...
	I1218 12:56:09.754984   12728 machine.go:88] provisioning docker machine ...
	I1218 12:56:09.754984   12728 buildroot.go:166] provisioning hostname "multinode-015900"
	I1218 12:56:09.754984   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:11.872698   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:11.872942   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:11.873175   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:14.343383   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:14.343744   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:14.350406   12728 main.go:141] libmachine: Using SSH client type: native
	I1218 12:56:14.351156   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.238.182 22 <nil> <nil>}
	I1218 12:56:14.351156   12728 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-015900 && echo "multinode-015900" | sudo tee /etc/hostname
	I1218 12:56:14.525071   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-015900
	
	I1218 12:56:14.525234   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:16.569409   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:16.569409   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:16.569493   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:19.068951   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:19.068951   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:19.075204   12728 main.go:141] libmachine: Using SSH client type: native
	I1218 12:56:19.075890   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.238.182 22 <nil> <nil>}
	I1218 12:56:19.075890   12728 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-015900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-015900/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-015900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 12:56:19.230399   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1218 12:56:19.230399   12728 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1218 12:56:19.230399   12728 buildroot.go:174] setting up certificates
	I1218 12:56:19.230399   12728 provision.go:83] configureAuth start
	I1218 12:56:19.230399   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:21.311084   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:21.311084   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:21.311084   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:23.836209   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:23.836396   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:23.836396   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:25.888827   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:25.889133   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:25.889133   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:28.330634   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:28.330883   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:28.330883   12728 provision.go:138] copyHostCerts
	I1218 12:56:28.331196   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I1218 12:56:28.331196   12728 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1218 12:56:28.331196   12728 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1218 12:56:28.331929   12728 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1218 12:56:28.333346   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I1218 12:56:28.333833   12728 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1218 12:56:28.333833   12728 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1218 12:56:28.334342   12728 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1218 12:56:28.335794   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I1218 12:56:28.336149   12728 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1218 12:56:28.336242   12728 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1218 12:56:28.336335   12728 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1218 12:56:28.337481   12728 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-015900 san=[192.168.238.182 192.168.238.182 localhost 127.0.0.1 minikube multinode-015900]
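	Note: configureAuth regenerates the Docker TLS server certificate so its SANs cover the VM's current IP plus the standard local names. A roughly equivalent manual issuance, as a sketch only — minikube generates these certificates in Go rather than shelling out to openssl, and the file names below simply follow the log:

	    # sketch: issue a server cert signed by the minikube CA with the same SANs
	    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	      -subj "/O=jenkins.multinode-015900" -out server.csr
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	      -extfile <(printf 'subjectAltName=IP:192.168.238.182,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-015900') \
	      -days 365 -out server.pem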
	I1218 12:56:28.727905   12728 provision.go:172] copyRemoteCerts
	I1218 12:56:28.739908   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 12:56:28.739908   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:30.827240   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:30.827404   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:30.827495   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:33.327174   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:33.327174   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:33.327826   12728 sshutil.go:53] new ssh client: &{IP:192.168.238.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa Username:docker}
	I1218 12:56:33.437649   12728 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6976894s)
	I1218 12:56:33.437755   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1218 12:56:33.437755   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1218 12:56:33.476571   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1218 12:56:33.476571   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I1218 12:56:33.515593   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1218 12:56:33.515593   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 12:56:33.554491   12728 provision.go:86] duration metric: configureAuth took 14.3240421s
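	Note: configureAuth ends by copying ca.pem, server.pem, and server-key.pem into /etc/docker — exactly the paths the dockerd --tlsverify flags reference in the unit written next. Once docker is up, the endpoint can be probed from the host with the client pair (sketch; run from a POSIX shell, pem paths abbreviated from the host's .minikube\certs directory):

	    # sketch: confirm dockerd answers on the mutual-TLS port from the log
	    docker --tlsverify --tlscacert ca.pem --tlscert cert.pem --tlskey key.pem \
	      -H tcp://192.168.238.182:2376 version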
	I1218 12:56:33.554491   12728 buildroot.go:189] setting minikube options for container-runtime
	I1218 12:56:33.555745   12728 config.go:182] Loaded profile config "multinode-015900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 12:56:33.555840   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:35.653970   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:35.654082   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:35.654082   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:38.096962   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:38.096962   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:38.102257   12728 main.go:141] libmachine: Using SSH client type: native
	I1218 12:56:38.103031   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.238.182 22 <nil> <nil>}
	I1218 12:56:38.103031   12728 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1218 12:56:38.246154   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1218 12:56:38.246233   12728 buildroot.go:70] root file system type: tmpfs
	I1218 12:56:38.246430   12728 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1218 12:56:38.246430   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:40.304852   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:40.304852   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:40.305086   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:42.781965   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:42.781965   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:42.787302   12728 main.go:141] libmachine: Using SSH client type: native
	I1218 12:56:42.788071   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.238.182 22 <nil> <nil>}
	I1218 12:56:42.788647   12728 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1218 12:56:42.961205   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1218 12:56:42.961758   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:45.043579   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:45.043820   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:45.043820   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:47.543715   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:47.543980   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:47.553171   12728 main.go:141] libmachine: Using SSH client type: native
	I1218 12:56:47.553885   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.238.182 22 <nil> <nil>}
	I1218 12:56:47.553885   12728 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1218 12:56:48.581386   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1218 12:56:48.581386   12728 machine.go:91] provisioned docker machine in 38.8262664s
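	Note: the update command above is deliberately idempotent: diff -u exits non-zero when docker.service.new differs from the installed unit (or, as here, when no unit exists yet), and only then is the new file moved into place and docker reloaded, enabled, and restarted. Reformatted for readability, the same one-liner is:

	    # pattern from the log: install and restart only when the rendered unit changed
	    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
	      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	    }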
	I1218 12:56:48.581386   12728 start.go:300] post-start starting for "multinode-015900" (driver="hyperv")
	I1218 12:56:48.581386   12728 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 12:56:48.595197   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 12:56:48.595197   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:50.674839   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:50.674839   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:50.674935   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:53.165656   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:53.165891   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:53.166527   12728 sshutil.go:53] new ssh client: &{IP:192.168.238.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa Username:docker}
	I1218 12:56:53.273775   12728 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6784988s)
	I1218 12:56:53.294575   12728 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 12:56:53.303042   12728 command_runner.go:130] > NAME=Buildroot
	I1218 12:56:53.303042   12728 command_runner.go:130] > VERSION=2021.02.12-1-g0492d51-dirty
	I1218 12:56:53.303042   12728 command_runner.go:130] > ID=buildroot
	I1218 12:56:53.303042   12728 command_runner.go:130] > VERSION_ID=2021.02.12
	I1218 12:56:53.303042   12728 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1218 12:56:53.303042   12728 info.go:137] Remote host: Buildroot 2021.02.12
	I1218 12:56:53.303042   12728 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1218 12:56:53.303042   12728 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1218 12:56:53.304157   12728 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem -> 149282.pem in /etc/ssl/certs
	I1218 12:56:53.304157   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem -> /etc/ssl/certs/149282.pem
	I1218 12:56:53.317316   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 12:56:53.335402   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem --> /etc/ssl/certs/149282.pem (1708 bytes)
	I1218 12:56:53.377785   12728 start.go:303] post-start completed in 4.796382s
	I1218 12:56:53.377785   12728 fix.go:56] fixHost completed within 1m21.0182343s
	I1218 12:56:53.377896   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:55.460466   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:55.460797   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:55.460797   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:57.922452   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:57.922733   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:57.931724   12728 main.go:141] libmachine: Using SSH client type: native
	I1218 12:56:57.932620   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.238.182 22 <nil> <nil>}
	I1218 12:56:57.932620   12728 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1218 12:56:58.070496   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702904218.079681508
	
	I1218 12:56:58.070496   12728 fix.go:206] guest clock: 1702904218.079681508
	I1218 12:56:58.070496   12728 fix.go:219] Guest: 2023-12-18 12:56:58.079681508 +0000 UTC Remote: 2023-12-18 12:56:53.3777852 +0000 UTC m=+86.592920601 (delta=4.701896308s)
	I1218 12:56:58.070609   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:57:00.191850   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:57:00.191940   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:57:00.191940   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:57:02.669395   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:57:02.669500   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:57:02.675435   12728 main.go:141] libmachine: Using SSH client type: native
	I1218 12:57:02.676187   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.238.182 22 <nil> <nil>}
	I1218 12:57:02.676187   12728 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702904218
	I1218 12:57:02.825924   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Dec 18 12:56:58 UTC 2023
	
	I1218 12:57:02.825924   12728 fix.go:226] clock set: Mon Dec 18 12:56:58 UTC 2023
	 (err=<nil>)
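	Note: fixHost samples the guest clock over SSH (date +%s.%N), compares it against the host, and on a noticeable delta — 4.7s here — resets the guest using date's epoch form:

	    # the @<seconds> form sets the clock from a Unix timestamp (GNU/busybox date)
	    sudo date -s @1702904218    # -> Mon Dec 18 12:56:58 UTC 2023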
	I1218 12:57:02.825924   12728 start.go:83] releasing machines lock for "multinode-015900", held for 1m30.4663406s
	I1218 12:57:02.826576   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:57:04.923215   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:57:04.923308   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:57:04.923308   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:57:07.393166   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:57:07.393166   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:57:07.397736   12728 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 12:57:07.397812   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:57:07.412510   12728 ssh_runner.go:195] Run: cat /version.json
	I1218 12:57:07.412510   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:57:09.606449   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:57:09.606449   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:57:09.606567   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:57:09.606624   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:57:09.606567   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:57:09.606624   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:57:12.248392   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:57:12.248567   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:57:12.249371   12728 sshutil.go:53] new ssh client: &{IP:192.168.238.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa Username:docker}
	I1218 12:57:12.268308   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:57:12.268308   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:57:12.270402   12728 sshutil.go:53] new ssh client: &{IP:192.168.238.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa Username:docker}
	I1218 12:57:12.355387   12728 command_runner.go:130] > {"iso_version": "v1.32.1-1702490427-17765", "kicbase_version": "v0.0.42-1702394725-17761", "minikube_version": "v1.32.0", "commit": "2780c4af854905e5cd4b94dc93de1f9d00b9040d"}
	I1218 12:57:12.355570   12728 ssh_runner.go:235] Completed: cat /version.json: (4.9430431s)
	I1218 12:57:12.368915   12728 ssh_runner.go:195] Run: systemctl --version
	I1218 12:57:12.453441   12728 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1218 12:57:12.453975   12728 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0561444s)
	I1218 12:57:12.454088   12728 command_runner.go:130] > systemd 247 (247)
	I1218 12:57:12.454088   12728 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1218 12:57:12.467104   12728 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 12:57:12.476559   12728 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1218 12:57:12.476918   12728 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 12:57:12.492645   12728 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 12:57:12.517230   12728 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1218 12:57:12.517254   12728 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1218 12:57:12.517331   12728 start.go:475] detecting cgroup driver to use...
	I1218 12:57:12.517707   12728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 12:57:12.547306   12728 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1218 12:57:12.562505   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1218 12:57:12.591623   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 12:57:12.607911   12728 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 12:57:12.628353   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 12:57:12.656134   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 12:57:12.690010   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 12:57:12.718962   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 12:57:12.751267   12728 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 12:57:12.779256   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
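	Note: although docker ends up being the selected runtime, minikube still rewrites /etc/containerd/config.toml in place with sed so containerd would be consistent if chosen later. The \1 backreference preserves the original indentation; for example, the cgroup edit is:

	    # turns "    SystemdCgroup = true" into "    SystemdCgroup = false" in config.toml
	    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml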
	I1218 12:57:12.807546   12728 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 12:57:12.821937   12728 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1218 12:57:12.837046   12728 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 12:57:12.864201   12728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 12:57:13.045161   12728 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 12:57:13.073124   12728 start.go:475] detecting cgroup driver to use...
	I1218 12:57:13.090500   12728 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1218 12:57:13.111397   12728 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1218 12:57:13.111555   12728 command_runner.go:130] > [Unit]
	I1218 12:57:13.111555   12728 command_runner.go:130] > Description=Docker Application Container Engine
	I1218 12:57:13.111555   12728 command_runner.go:130] > Documentation=https://docs.docker.com
	I1218 12:57:13.111555   12728 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1218 12:57:13.111555   12728 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1218 12:57:13.111555   12728 command_runner.go:130] > StartLimitBurst=3
	I1218 12:57:13.111555   12728 command_runner.go:130] > StartLimitIntervalSec=60
	I1218 12:57:13.111555   12728 command_runner.go:130] > [Service]
	I1218 12:57:13.111668   12728 command_runner.go:130] > Type=notify
	I1218 12:57:13.111668   12728 command_runner.go:130] > Restart=on-failure
	I1218 12:57:13.111668   12728 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1218 12:57:13.111668   12728 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1218 12:57:13.111758   12728 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1218 12:57:13.111871   12728 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1218 12:57:13.111871   12728 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1218 12:57:13.111871   12728 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1218 12:57:13.111871   12728 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1218 12:57:13.111871   12728 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1218 12:57:13.111871   12728 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1218 12:57:13.111871   12728 command_runner.go:130] > ExecStart=
	I1218 12:57:13.112004   12728 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1218 12:57:13.112004   12728 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1218 12:57:13.112004   12728 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1218 12:57:13.112004   12728 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1218 12:57:13.112004   12728 command_runner.go:130] > LimitNOFILE=infinity
	I1218 12:57:13.112004   12728 command_runner.go:130] > LimitNPROC=infinity
	I1218 12:57:13.112004   12728 command_runner.go:130] > LimitCORE=infinity
	I1218 12:57:13.112004   12728 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1218 12:57:13.112123   12728 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1218 12:57:13.112123   12728 command_runner.go:130] > TasksMax=infinity
	I1218 12:57:13.112123   12728 command_runner.go:130] > TimeoutStartSec=0
	I1218 12:57:13.112123   12728 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1218 12:57:13.112123   12728 command_runner.go:130] > Delegate=yes
	I1218 12:57:13.112123   12728 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1218 12:57:13.112123   12728 command_runner.go:130] > KillMode=process
	I1218 12:57:13.112123   12728 command_runner.go:130] > [Install]
	I1218 12:57:13.112232   12728 command_runner.go:130] > WantedBy=multi-user.target
	I1218 12:57:13.127067   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 12:57:13.160908   12728 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1218 12:57:13.198534   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 12:57:13.231182   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 12:57:13.261718   12728 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 12:57:13.320690   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 12:57:13.343469   12728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 12:57:13.370708   12728 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1218 12:57:13.386568   12728 ssh_runner.go:195] Run: which cri-dockerd
	I1218 12:57:13.392566   12728 command_runner.go:130] > /usr/bin/cri-dockerd
	I1218 12:57:13.404309   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1218 12:57:13.418718   12728 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1218 12:57:13.459161   12728 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1218 12:57:13.634470   12728 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1218 12:57:13.786913   12728 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1218 12:57:13.786913   12728 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
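	Note: the 130-byte /etc/docker/daemon.json is rendered in memory and never echoed to the log. A daemon.json that pins docker to the cgroupfs driver would look roughly like the sketch below (assumed shape; the exact payload is not shown in this log):

	    # sketch only — the real file's contents are not printed here
	    cat <<'EOF' | sudo tee /etc/docker/daemon.json
	    {
	      "exec-opts": ["native.cgroupdriver=cgroupfs"]
	    }
	    EOF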
	I1218 12:57:13.834139   12728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 12:57:13.999980   12728 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1218 12:57:15.517916   12728 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5177707s)
	I1218 12:57:15.530806   12728 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1218 12:57:15.692321   12728 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1218 12:57:15.866585   12728 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1218 12:57:16.038076   12728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 12:57:16.202888   12728 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1218 12:57:16.238706   12728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 12:57:16.399295   12728 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1218 12:57:16.502246   12728 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1218 12:57:16.516636   12728 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1218 12:57:16.526384   12728 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1218 12:57:16.526465   12728 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1218 12:57:16.526465   12728 command_runner.go:130] > Device: 16h/22d	Inode: 875         Links: 1
	I1218 12:57:16.526465   12728 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1218 12:57:16.526548   12728 command_runner.go:130] > Access: 2023-12-18 12:57:16.430111643 +0000
	I1218 12:57:16.526548   12728 command_runner.go:130] > Modify: 2023-12-18 12:57:16.430111643 +0000
	I1218 12:57:16.526548   12728 command_runner.go:130] > Change: 2023-12-18 12:57:16.433111643 +0000
	I1218 12:57:16.526548   12728 command_runner.go:130] >  Birth: -
	I1218 12:57:16.526617   12728 start.go:543] Will wait 60s for crictl version
	I1218 12:57:16.542155   12728 ssh_runner.go:195] Run: which crictl
	I1218 12:57:16.546892   12728 command_runner.go:130] > /usr/bin/crictl
	I1218 12:57:16.561371   12728 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1218 12:57:16.625278   12728 command_runner.go:130] > Version:  0.1.0
	I1218 12:57:16.625390   12728 command_runner.go:130] > RuntimeName:  docker
	I1218 12:57:16.625390   12728 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1218 12:57:16.625390   12728 command_runner.go:130] > RuntimeApiVersion:  v1
	I1218 12:57:16.625390   12728 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1218 12:57:16.638872   12728 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1218 12:57:16.670929   12728 command_runner.go:130] > 24.0.7
	I1218 12:57:16.681457   12728 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1218 12:57:16.712780   12728 command_runner.go:130] > 24.0.7
	I1218 12:57:16.714689   12728 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1218 12:57:16.714824   12728 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1218 12:57:16.720334   12728 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1218 12:57:16.720334   12728 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1218 12:57:16.720334   12728 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1218 12:57:16.720334   12728 ip.go:207] Found interface: {Index:8 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ed:dc:88 Flags:up|broadcast|multicast|running}
	I1218 12:57:16.722815   12728 ip.go:210] interface addr: fe80::61bd:e46f:b0aa:cbb0/64
	I1218 12:57:16.722815   12728 ip.go:210] interface addr: 192.168.224.1/20
	I1218 12:57:16.737736   12728 ssh_runner.go:195] Run: grep 192.168.224.1	host.minikube.internal$ /etc/hosts
	I1218 12:57:16.743339   12728 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
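	Note: the one-liner above updates /etc/hosts idempotently: grep -v drops any existing host.minikube.internal line, the new mapping is appended, and the result is staged in /tmp and copied back with sudo (a plain > redirection would not run as root). The same pattern, reformatted:

	    # pattern: rewrite a single /etc/hosts entry under sudo
	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      printf '192.168.224.1\thost.minikube.internal\n'; } > /tmp/h.$$ \
	      && sudo cp /tmp/h.$$ /etc/hosts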
	I1218 12:57:16.760906   12728 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 12:57:16.770735   12728 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1218 12:57:16.794361   12728 docker.go:671] Got preloaded images: 
	I1218 12:57:16.794637   12728 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I1218 12:57:16.806596   12728 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1218 12:57:16.820642   12728 command_runner.go:139] > {"Repositories":{}}
	I1218 12:57:16.834413   12728 ssh_runner.go:195] Run: which lz4
	I1218 12:57:16.839729   12728 command_runner.go:130] > /usr/bin/lz4
	I1218 12:57:16.839729   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1218 12:57:16.854127   12728 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1218 12:57:16.859336   12728 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1218 12:57:16.859336   12728 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1218 12:57:16.859336   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I1218 12:57:19.355691   12728 docker.go:635] Took 2.515452 seconds to copy over tarball
	I1218 12:57:19.367419   12728 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1218 12:57:28.938222   12728 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (9.570769s)
	I1218 12:57:28.938349   12728 ssh_runner.go:146] rm: /preloaded.tar.lz4
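	Note: this is minikube's image preload: a host-cached lz4 tarball of the docker image store is copied into the guest and unpacked over /var, so no images have to be pulled from a registry. A manual equivalent of the restore, as a sketch:

	    # sketch: after the scp shown above has placed /preloaded.tar.lz4 in the guest
	    ssh docker@192.168.238.182 \
	      'sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4'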
	I1218 12:57:29.005360   12728 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1218 12:57:29.026188   12728 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I1218 12:57:29.027539   12728 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1218 12:57:29.071382   12728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 12:57:29.240879   12728 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1218 12:57:31.607210   12728 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.3663226s)
	I1218 12:57:31.618038   12728 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1218 12:57:31.643655   12728 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1218 12:57:31.643655   12728 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1218 12:57:31.643655   12728 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1218 12:57:31.643655   12728 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1218 12:57:31.643655   12728 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1218 12:57:31.643655   12728 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1218 12:57:31.643655   12728 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1218 12:57:31.643655   12728 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 12:57:31.643655   12728 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1218 12:57:31.643655   12728 cache_images.go:84] Images are preloaded, skipping loading
	I1218 12:57:31.653524   12728 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1218 12:57:31.686234   12728 command_runner.go:130] > cgroupfs
	I1218 12:57:31.686873   12728 cni.go:84] Creating CNI manager for ""
	I1218 12:57:31.687106   12728 cni.go:136] 1 nodes found, recommending kindnet
	I1218 12:57:31.687167   12728 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1218 12:57:31.687167   12728 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.238.182 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-015900 NodeName:multinode-015900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.238.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.238.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 12:57:31.687515   12728 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.238.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-015900"
	  kubeletExtraArgs:
	    node-ip: 192.168.238.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.238.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
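	Note: the rendered config above carries four documents — InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration — and is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Recent kubeadm releases can sanity-check such a file offline (sketch; assumes the "config validate" subcommand available in newer kubeadm):

	    # sketch: validate the rendered kubeadm config inside the guest
	    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new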
	
	I1218 12:57:31.687742   12728 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-015900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.238.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-015900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1218 12:57:31.708421   12728 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1218 12:57:31.725488   12728 command_runner.go:130] > kubeadm
	I1218 12:57:31.725613   12728 command_runner.go:130] > kubectl
	I1218 12:57:31.725673   12728 command_runner.go:130] > kubelet
	I1218 12:57:31.725673   12728 binaries.go:44] Found k8s binaries, skipping transfer
	I1218 12:57:31.739209   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 12:57:31.753192   12728 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1218 12:57:31.778528   12728 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1218 12:57:31.804238   12728 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2108 bytes)
	I1218 12:57:31.851399   12728 ssh_runner.go:195] Run: grep 192.168.238.182	control-plane.minikube.internal$ /etc/hosts
	I1218 12:57:31.857210   12728 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.238.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 12:57:31.876385   12728 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900 for IP: 192.168.238.182
	I1218 12:57:31.876638   12728 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 12:57:31.877499   12728 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I1218 12:57:31.877811   12728 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I1218 12:57:31.878628   12728 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\client.key
	I1218 12:57:31.878628   12728 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\client.crt with IP's: []
	I1218 12:57:31.962683   12728 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\client.crt ...
	I1218 12:57:31.962683   12728 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\client.crt: {Name:mk443893db2ab4547173669cb5fb85af266c047f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 12:57:31.965175   12728 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\client.key ...
	I1218 12:57:31.965281   12728 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\client.key: {Name:mkf8d591a6b02a85c501b46c227177800c278172 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 12:57:31.966536   12728 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.key.6c162e13
	I1218 12:57:31.966762   12728 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.crt.6c162e13 with IP's: [192.168.238.182 10.96.0.1 127.0.0.1 10.0.0.1]
	I1218 12:57:32.126840   12728 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.crt.6c162e13 ...
	I1218 12:57:32.126840   12728 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.crt.6c162e13: {Name:mk12629f70b92d5152b01857c3d0d0c6fa3632c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 12:57:32.128938   12728 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.key.6c162e13 ...
	I1218 12:57:32.128938   12728 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.key.6c162e13: {Name:mkb94417e121f818ffd804c96f0443c7e09195d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 12:57:32.129195   12728 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.crt.6c162e13 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.crt
	I1218 12:57:32.143256   12728 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.key.6c162e13 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.key
	I1218 12:57:32.144372   12728 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\proxy-client.key
	I1218 12:57:32.145401   12728 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\proxy-client.crt with IP's: []
	I1218 12:57:32.613324   12728 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\proxy-client.crt ...
	I1218 12:57:32.613324   12728 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\proxy-client.crt: {Name:mk00d4f8ae0fbc47c73383835c3cafe25f66cfa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 12:57:32.615239   12728 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\proxy-client.key ...
	I1218 12:57:32.615239   12728 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\proxy-client.key: {Name:mkf726965699025bc16f7e34ff9e188132cd1885 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 12:57:32.615724   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1218 12:57:32.615724   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1218 12:57:32.616724   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1218 12:57:32.627986   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1218 12:57:32.628633   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1218 12:57:32.628819   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1218 12:57:32.628983   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1218 12:57:32.629112   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1218 12:57:32.629665   12728 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\14928.pem (1338 bytes)
	W1218 12:57:32.630076   12728 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\14928_empty.pem, impossibly tiny 0 bytes
	I1218 12:57:32.630237   12728 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1218 12:57:32.630519   12728 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1218 12:57:32.630957   12728 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1218 12:57:32.631001   12728 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1218 12:57:32.631726   12728 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem (1708 bytes)
	I1218 12:57:32.631904   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\14928.pem -> /usr/share/ca-certificates/14928.pem
	I1218 12:57:32.632112   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem -> /usr/share/ca-certificates/149282.pem
	I1218 12:57:32.632287   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1218 12:57:32.633793   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1218 12:57:32.675421   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 12:57:32.714222   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 12:57:32.754585   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1218 12:57:32.792305   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 12:57:32.833479   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 12:57:32.872044   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 12:57:32.914883   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1218 12:57:32.964625   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\14928.pem --> /usr/share/ca-certificates/14928.pem (1338 bytes)
	I1218 12:57:33.006669   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem --> /usr/share/ca-certificates/149282.pem (1708 bytes)
	I1218 12:57:33.048560   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 12:57:33.091541   12728 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
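The vm_assets/ssh_runner sequence above stages every local PEM into the guest over SSH. A minimal hand-run equivalent for one of those copies, assuming minikube's usual per-machine SSH key and "docker" user (neither is shown in this log, so treat both as assumptions):

    # Sketch only: mirrors the ca.crt copy above; key path and SSH user are assumed defaults.
    KEY=~/.minikube/machines/multinode-015900/id_rsa
    scp -i "$KEY" .minikube/ca.crt docker@192.168.238.182:/tmp/ca.crt
    ssh -i "$KEY" docker@192.168.238.182 'sudo install -m 0644 /tmp/ca.crt /var/lib/minikube/certs/ca.crt'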
	I1218 12:57:33.132432   12728 ssh_runner.go:195] Run: openssl version
	I1218 12:57:33.138589   12728 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1218 12:57:33.149704   12728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14928.pem && ln -fs /usr/share/ca-certificates/14928.pem /etc/ssl/certs/14928.pem"
	I1218 12:57:33.179238   12728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14928.pem
	I1218 12:57:33.185841   12728 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 18 11:59 /usr/share/ca-certificates/14928.pem
	I1218 12:57:33.185841   12728 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 18 11:59 /usr/share/ca-certificates/14928.pem
	I1218 12:57:33.198114   12728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14928.pem
	I1218 12:57:33.205514   12728 command_runner.go:130] > 51391683
	I1218 12:57:33.217534   12728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14928.pem /etc/ssl/certs/51391683.0"
	I1218 12:57:33.251552   12728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149282.pem && ln -fs /usr/share/ca-certificates/149282.pem /etc/ssl/certs/149282.pem"
	I1218 12:57:33.280643   12728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149282.pem
	I1218 12:57:33.287001   12728 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 18 11:59 /usr/share/ca-certificates/149282.pem
	I1218 12:57:33.287001   12728 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 18 11:59 /usr/share/ca-certificates/149282.pem
	I1218 12:57:33.299107   12728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149282.pem
	I1218 12:57:33.307818   12728 command_runner.go:130] > 3ec20f2e
	I1218 12:57:33.320269   12728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149282.pem /etc/ssl/certs/3ec20f2e.0"
	I1218 12:57:33.347664   12728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1218 12:57:33.375157   12728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 12:57:33.381208   12728 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 18 11:45 /usr/share/ca-certificates/minikubeCA.pem
	I1218 12:57:33.381208   12728 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 18 11:45 /usr/share/ca-certificates/minikubeCA.pem
	I1218 12:57:33.393591   12728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 12:57:33.401438   12728 command_runner.go:130] > b5213941
	I1218 12:57:33.414432   12728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
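The three openssl/ln rounds above implement OpenSSL's hashed CA directory convention: each trusted certificate must be reachable in /etc/ssl/certs through a <subject-hash>.0 symlink. The same check-and-link for one certificate, condensed:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")     # prints b5213941 for this CA, per the log
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"    # idempotent, same effect as the test -L || ln -fs above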
	I1218 12:57:33.442869   12728 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1218 12:57:33.448167   12728 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1218 12:57:33.448167   12728 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1218 12:57:33.448766   12728 kubeadm.go:404] StartCluster: {Name:multinode-015900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-015900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.238.182 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 12:57:33.458234   12728 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1218 12:57:33.499233   12728 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 12:57:33.514935   12728 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1218 12:57:33.514935   12728 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1218 12:57:33.514935   12728 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1218 12:57:33.528420   12728 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 12:57:33.553615   12728 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 12:57:33.567184   12728 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1218 12:57:33.567184   12728 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1218 12:57:33.567283   12728 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1218 12:57:33.567283   12728 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 12:57:33.567338   12728 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 12:57:33.567409   12728 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1218 12:57:34.324173   12728 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 12:57:34.324224   12728 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 12:57:48.133987   12728 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1218 12:57:48.133987   12728 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1218 12:57:48.133987   12728 kubeadm.go:322] [preflight] Running pre-flight checks
	I1218 12:57:48.133987   12728 command_runner.go:130] > [preflight] Running pre-flight checks
	I1218 12:57:48.134481   12728 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 12:57:48.134481   12728 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 12:57:48.134481   12728 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 12:57:48.134481   12728 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 12:57:48.134481   12728 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1218 12:57:48.134481   12728 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
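That preflight hint can be acted on verbatim; pre-pulling the control-plane images before init (version taken from this run) avoids the download pause during kubeadm init:

    kubeadm config images pull --kubernetes-version v1.28.4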
	I1218 12:57:48.134481   12728 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 12:57:48.135517   12728 out.go:204]   - Generating certificates and keys ...
	I1218 12:57:48.134481   12728 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 12:57:48.135517   12728 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1218 12:57:48.135517   12728 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1218 12:57:48.135517   12728 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1218 12:57:48.136484   12728 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1218 12:57:48.136484   12728 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 12:57:48.136484   12728 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 12:57:48.136484   12728 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1218 12:57:48.136484   12728 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1218 12:57:48.136484   12728 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1218 12:57:48.136484   12728 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1218 12:57:48.136484   12728 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1218 12:57:48.136484   12728 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1218 12:57:48.136484   12728 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1218 12:57:48.136484   12728 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1218 12:57:48.137534   12728 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-015900] and IPs [192.168.238.182 127.0.0.1 ::1]
	I1218 12:57:48.137534   12728 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-015900] and IPs [192.168.238.182 127.0.0.1 ::1]
	I1218 12:57:48.137534   12728 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1218 12:57:48.137534   12728 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1218 12:57:48.137534   12728 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-015900] and IPs [192.168.238.182 127.0.0.1 ::1]
	I1218 12:57:48.137534   12728 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-015900] and IPs [192.168.238.182 127.0.0.1 ::1]
	I1218 12:57:48.137534   12728 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 12:57:48.137534   12728 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 12:57:48.138507   12728 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 12:57:48.138507   12728 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 12:57:48.138507   12728 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1218 12:57:48.138507   12728 command_runner.go:130] > [certs] Generating "sa" key and public key
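Once the certs phase completes, the SANs logged above can be double-checked on the node; a quick sketch against the certificateDir this run uses:

    openssl x509 -in /var/lib/minikube/certs/etcd/server.crt -noout -text \
      | grep -A1 'Subject Alternative Name'
    # expect DNS:localhost, DNS:multinode-015900 and IPs 192.168.238.182, 127.0.0.1, ::1, per the log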
	I1218 12:57:48.138507   12728 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 12:57:48.138507   12728 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 12:57:48.138507   12728 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 12:57:48.138507   12728 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 12:57:48.138507   12728 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 12:57:48.138507   12728 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 12:57:48.138507   12728 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 12:57:48.138507   12728 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 12:57:48.138507   12728 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 12:57:48.138507   12728 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 12:57:48.139512   12728 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 12:57:48.139512   12728 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 12:57:48.139512   12728 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 12:57:48.139512   12728 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 12:57:48.140651   12728 out.go:204]   - Booting up control plane ...
	I1218 12:57:48.140651   12728 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 12:57:48.140651   12728 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 12:57:48.140651   12728 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 12:57:48.141509   12728 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 12:57:48.141509   12728 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 12:57:48.141509   12728 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 12:57:48.141509   12728 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 12:57:48.141509   12728 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 12:57:48.141509   12728 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 12:57:48.141509   12728 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 12:57:48.141509   12728 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1218 12:57:48.141509   12728 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1218 12:57:48.142493   12728 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1218 12:57:48.142493   12728 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1218 12:57:48.142493   12728 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.006892 seconds
	I1218 12:57:48.142493   12728 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.006892 seconds
	I1218 12:57:48.142493   12728 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1218 12:57:48.142493   12728 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1218 12:57:48.142493   12728 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1218 12:57:48.143494   12728 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1218 12:57:48.143494   12728 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1218 12:57:48.143494   12728 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1218 12:57:48.143494   12728 command_runner.go:130] > [mark-control-plane] Marking the node multinode-015900 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1218 12:57:48.143494   12728 kubeadm.go:322] [mark-control-plane] Marking the node multinode-015900 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1218 12:57:48.143494   12728 command_runner.go:130] > [bootstrap-token] Using token: wngx84.cihr8ssvap9im7kf
	I1218 12:57:48.143494   12728 kubeadm.go:322] [bootstrap-token] Using token: wngx84.cihr8ssvap9im7kf
	I1218 12:57:48.144495   12728 out.go:204]   - Configuring RBAC rules ...
	I1218 12:57:48.144495   12728 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1218 12:57:48.144495   12728 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1218 12:57:48.145496   12728 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1218 12:57:48.145496   12728 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1218 12:57:48.145496   12728 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1218 12:57:48.145496   12728 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1218 12:57:48.145496   12728 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1218 12:57:48.146494   12728 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1218 12:57:48.146494   12728 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1218 12:57:48.146494   12728 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1218 12:57:48.146494   12728 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1218 12:57:48.146494   12728 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1218 12:57:48.146494   12728 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1218 12:57:48.146494   12728 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1218 12:57:48.146494   12728 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1218 12:57:48.146494   12728 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1218 12:57:48.146494   12728 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1218 12:57:48.146494   12728 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1218 12:57:48.146494   12728 kubeadm.go:322] 
	I1218 12:57:48.147489   12728 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1218 12:57:48.147489   12728 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1218 12:57:48.147489   12728 kubeadm.go:322] 
	I1218 12:57:48.147489   12728 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1218 12:57:48.147489   12728 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1218 12:57:48.147489   12728 kubeadm.go:322] 
	I1218 12:57:48.147489   12728 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1218 12:57:48.147489   12728 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1218 12:57:48.147489   12728 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1218 12:57:48.147489   12728 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1218 12:57:48.147489   12728 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1218 12:57:48.147489   12728 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1218 12:57:48.147489   12728 kubeadm.go:322] 
	I1218 12:57:48.147489   12728 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1218 12:57:48.147489   12728 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1218 12:57:48.147489   12728 kubeadm.go:322] 
	I1218 12:57:48.148488   12728 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1218 12:57:48.148488   12728 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1218 12:57:48.148488   12728 kubeadm.go:322] 
	I1218 12:57:48.148488   12728 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1218 12:57:48.148488   12728 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1218 12:57:48.148488   12728 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1218 12:57:48.148488   12728 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1218 12:57:48.148488   12728 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1218 12:57:48.148488   12728 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1218 12:57:48.148488   12728 kubeadm.go:322] 
	I1218 12:57:48.148488   12728 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1218 12:57:48.148488   12728 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1218 12:57:48.148488   12728 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1218 12:57:48.148488   12728 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1218 12:57:48.148488   12728 kubeadm.go:322] 
	I1218 12:57:48.149499   12728 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token wngx84.cihr8ssvap9im7kf \
	I1218 12:57:48.149499   12728 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token wngx84.cihr8ssvap9im7kf \
	I1218 12:57:48.149499   12728 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b2fa66f0127ff189a61b5e0d7ad6d9c9a72d2910f0374f3c179dae174436a982 \
	I1218 12:57:48.149499   12728 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:b2fa66f0127ff189a61b5e0d7ad6d9c9a72d2910f0374f3c179dae174436a982 \
	I1218 12:57:48.149499   12728 kubeadm.go:322] 	--control-plane 
	I1218 12:57:48.149499   12728 command_runner.go:130] > 	--control-plane 
	I1218 12:57:48.149499   12728 kubeadm.go:322] 
	I1218 12:57:48.149499   12728 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1218 12:57:48.149499   12728 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1218 12:57:48.149499   12728 kubeadm.go:322] 
	I1218 12:57:48.149499   12728 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token wngx84.cihr8ssvap9im7kf \
	I1218 12:57:48.149499   12728 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token wngx84.cihr8ssvap9im7kf \
	I1218 12:57:48.150488   12728 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b2fa66f0127ff189a61b5e0d7ad6d9c9a72d2910f0374f3c179dae174436a982 
	I1218 12:57:48.150488   12728 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:b2fa66f0127ff189a61b5e0d7ad6d9c9a72d2910f0374f3c179dae174436a982 
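The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key, and can be recomputed on the control plane to validate a join command (the standard kubeadm recipe, pointed at this cluster's certificateDir):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # should match b2fa66f0127ff189a61b5e0d7ad6d9c9a72d2910f0374f3c179dae174436a982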
	I1218 12:57:48.150488   12728 cni.go:84] Creating CNI manager for ""
	I1218 12:57:48.150488   12728 cni.go:136] 1 nodes found, recommending kindnet
	I1218 12:57:48.150488   12728 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1218 12:57:48.164489   12728 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1218 12:57:48.174940   12728 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1218 12:57:48.175047   12728 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1218 12:57:48.175047   12728 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1218 12:57:48.175079   12728 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1218 12:57:48.175079   12728 command_runner.go:130] > Access: 2023-12-18 12:56:02.700638100 +0000
	I1218 12:57:48.175079   12728 command_runner.go:130] > Modify: 2023-12-13 23:27:31.000000000 +0000
	I1218 12:57:48.175079   12728 command_runner.go:130] > Change: 2023-12-18 12:55:51.112000000 +0000
	I1218 12:57:48.175079   12728 command_runner.go:130] >  Birth: -
	I1218 12:57:48.175141   12728 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1218 12:57:48.175141   12728 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1218 12:57:48.233131   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1218 12:57:49.706592   12728 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1218 12:57:49.706592   12728 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1218 12:57:49.706592   12728 command_runner.go:130] > serviceaccount/kindnet created
	I1218 12:57:49.706592   12728 command_runner.go:130] > daemonset.apps/kindnet created
	I1218 12:57:49.706733   12728 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.4735961s)
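With the kindnet manifest applied, the resulting daemonset can be inspected directly; a sketch reusing the same in-VM kubectl and kubeconfig as the command above (kube-system is where the stock kindnet manifest places it, an assumption here since the apply output omits the namespace):

    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get daemonset kindnet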
	I1218 12:57:49.706800   12728 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1218 12:57:49.722595   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:49.723550   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=30d8ecd1811578f7b9db580c501c654c189f68d4 minikube.k8s.io/name=multinode-015900 minikube.k8s.io/updated_at=2023_12_18T12_57_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:49.726641   12728 command_runner.go:130] > -16
	I1218 12:57:49.726641   12728 ops.go:34] apiserver oom_adj: -16
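The oom_adj probe above verifies the apiserver is shielded from the kernel OOM killer (-16 lowers its kill priority). Roughly the same check by hand; oom_score_adj is the non-deprecated interface on current kernels:

    cat /proc/$(pgrep kube-apiserver)/oom_adj        # legacy knob, the value read above
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # modern equivalent, range -1000..1000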
	I1218 12:57:49.882435   12728 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1218 12:57:49.890421   12728 command_runner.go:130] > node/multinode-015900 labeled
	I1218 12:57:49.898137   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:50.028695   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:50.406976   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:50.520269   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:50.902348   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:51.024725   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:51.407290   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:51.535026   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:51.909684   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:52.018709   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:52.411323   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:52.544268   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:52.895474   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:53.022239   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:53.398869   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:53.512785   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:53.904948   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:54.033153   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:54.403099   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:54.521195   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:54.912618   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:55.036280   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:55.399586   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:55.511159   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:55.901320   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:56.022008   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:56.403621   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:56.527385   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:56.906268   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:57.023171   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:57.404284   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:57.593903   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:57.911156   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:58.028141   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:58.398195   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:58.533673   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:58.902316   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:59.077232   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:59.412096   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:59.542261   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:59.902832   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:58:00.056154   12728 command_runner.go:130] > NAME      SECRETS   AGE
	I1218 12:58:00.056154   12728 command_runner.go:130] > default   0         1s
	I1218 12:58:00.056154   12728 kubeadm.go:1088] duration metric: took 10.349206s to wait for elevateKubeSystemPrivileges.
	I1218 12:58:00.056154   12728 kubeadm.go:406] StartCluster complete in 26.6072932s
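The ~10s retry loop that just finished was waiting for the token controller to create the "default" service account; the same poll, sketched with a plain kubectl rather than the in-VM binary used above:

    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5   # NotFound until the controller-manager has created it
    done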
	I1218 12:58:00.056154   12728 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 12:58:00.056154   12728 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 12:58:00.058161   12728 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 12:58:00.060158   12728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1218 12:58:00.060158   12728 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1218 12:58:00.060158   12728 addons.go:69] Setting storage-provisioner=true in profile "multinode-015900"
	I1218 12:58:00.060158   12728 addons.go:69] Setting default-storageclass=true in profile "multinode-015900"
	I1218 12:58:00.060158   12728 addons.go:231] Setting addon storage-provisioner=true in "multinode-015900"
	I1218 12:58:00.060158   12728 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-015900"
	I1218 12:58:00.060158   12728 host.go:66] Checking if "multinode-015900" exists ...
	I1218 12:58:00.060158   12728 config.go:182] Loaded profile config "multinode-015900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 12:58:00.061160   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:58:00.061160   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:58:00.077160   12728 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 12:58:00.078162   12728 kapi.go:59] client config for multinode-015900: &rest.Config{Host:"https://192.168.238.182:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-015900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-015900\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21a1f20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 12:58:00.080174   12728 cert_rotation.go:137] Starting client certificate rotation controller
	I1218 12:58:00.080174   12728 round_trippers.go:463] GET https://192.168.238.182:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1218 12:58:00.080174   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:00.080174   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:00.080174   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:00.130451   12728 round_trippers.go:574] Response Status: 200 OK in 50 milliseconds
	I1218 12:58:00.130451   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:00.130451   12728 round_trippers.go:580]     Audit-Id: 95860d47-a2e3-4982-903e-8eee393fb543
	I1218 12:58:00.130451   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:00.130708   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:00.130708   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:00.130708   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:00.130708   12728 round_trippers.go:580]     Content-Length: 291
	I1218 12:58:00.130708   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:00 GMT
	I1218 12:58:00.130842   12728 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ac4adb10-5952-477f-b353-31b85c54eafc","resourceVersion":"343","creationTimestamp":"2023-12-18T12:57:48Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1218 12:58:00.131613   12728 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ac4adb10-5952-477f-b353-31b85c54eafc","resourceVersion":"343","creationTimestamp":"2023-12-18T12:57:48Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1218 12:58:00.131729   12728 round_trippers.go:463] PUT https://192.168.238.182:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1218 12:58:00.131729   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:00.131795   12728 round_trippers.go:473]     Content-Type: application/json
	I1218 12:58:00.131795   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:00.131795   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:00.163154   12728 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I1218 12:58:00.163818   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:00.163818   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:00 GMT
	I1218 12:58:00.163818   12728 round_trippers.go:580]     Audit-Id: e4fb48ea-84dd-4e06-a619-cacf4b859053
	I1218 12:58:00.163818   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:00.163951   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:00.163951   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:00.163951   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:00.163951   12728 round_trippers.go:580]     Content-Length: 291
	I1218 12:58:00.163951   12728 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ac4adb10-5952-477f-b353-31b85c54eafc","resourceVersion":"367","creationTimestamp":"2023-12-18T12:57:48Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
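The GET/PUT pair above rescales CoreDNS through the deployment's scale subresource (an autoscaling/v1 Scale object), dropping it from 2 replicas to 1 for a single-node cluster. kubectl issues the same PUT under the hood:

    kubectl -n kube-system scale deployment coredns --replicas=1
    # or read the raw subresource that was PUT above:
    kubectl get --raw /apis/apps/v1/namespaces/kube-system/deployments/coredns/scale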
	I1218 12:58:00.462038   12728 command_runner.go:130] > apiVersion: v1
	I1218 12:58:00.462038   12728 command_runner.go:130] > data:
	I1218 12:58:00.462038   12728 command_runner.go:130] >   Corefile: |
	I1218 12:58:00.462038   12728 command_runner.go:130] >     .:53 {
	I1218 12:58:00.462038   12728 command_runner.go:130] >         errors
	I1218 12:58:00.462038   12728 command_runner.go:130] >         health {
	I1218 12:58:00.462038   12728 command_runner.go:130] >            lameduck 5s
	I1218 12:58:00.462038   12728 command_runner.go:130] >         }
	I1218 12:58:00.462038   12728 command_runner.go:130] >         ready
	I1218 12:58:00.462038   12728 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1218 12:58:00.462038   12728 command_runner.go:130] >            pods insecure
	I1218 12:58:00.462038   12728 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1218 12:58:00.462038   12728 command_runner.go:130] >            ttl 30
	I1218 12:58:00.462038   12728 command_runner.go:130] >         }
	I1218 12:58:00.462038   12728 command_runner.go:130] >         prometheus :9153
	I1218 12:58:00.462038   12728 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1218 12:58:00.462038   12728 command_runner.go:130] >            max_concurrent 1000
	I1218 12:58:00.462038   12728 command_runner.go:130] >         }
	I1218 12:58:00.462038   12728 command_runner.go:130] >         cache 30
	I1218 12:58:00.462038   12728 command_runner.go:130] >         loop
	I1218 12:58:00.462038   12728 command_runner.go:130] >         reload
	I1218 12:58:00.462038   12728 command_runner.go:130] >         loadbalance
	I1218 12:58:00.462038   12728 command_runner.go:130] >     }
	I1218 12:58:00.462038   12728 command_runner.go:130] > kind: ConfigMap
	I1218 12:58:00.462038   12728 command_runner.go:130] > metadata:
	I1218 12:58:00.462038   12728 command_runner.go:130] >   creationTimestamp: "2023-12-18T12:57:48Z"
	I1218 12:58:00.462038   12728 command_runner.go:130] >   name: coredns
	I1218 12:58:00.462038   12728 command_runner.go:130] >   namespace: kube-system
	I1218 12:58:00.462038   12728 command_runner.go:130] >   resourceVersion: "266"
	I1218 12:58:00.462038   12728 command_runner.go:130] >   uid: 40ad0019-312a-4903-9379-40e12697856d
	I1218 12:58:00.463069   12728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
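The sed pipeline above rewrites the Corefile dumped just before it: it inserts a "log" line ahead of "errors" and a "hosts" block ahead of the "forward" plugin, so pods can resolve host.minikube.internal to the Windows host (192.168.224.1 in this run). To see the result:

    # After the replace, the Corefile should contain, just above the forward block:
    #   hosts {
    #      192.168.224.1 host.minikube.internal
    #      fallthrough
    #   }
    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'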
	I1218 12:58:00.586181   12728 round_trippers.go:463] GET https://192.168.238.182:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1218 12:58:00.586281   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:00.586281   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:00.586370   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:00.597670   12728 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1218 12:58:00.597670   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:00.597670   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:00.597670   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:00.597670   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:00.597670   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:00.597670   12728 round_trippers.go:580]     Content-Length: 291
	I1218 12:58:00.597670   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:00 GMT
	I1218 12:58:00.597670   12728 round_trippers.go:580]     Audit-Id: fc5e368f-f415-447a-ba6c-63c0bfa75b8e
	I1218 12:58:00.597670   12728 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ac4adb10-5952-477f-b353-31b85c54eafc","resourceVersion":"397","creationTimestamp":"2023-12-18T12:57:48Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1218 12:58:00.598669   12728 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-015900" context rescaled to 1 replicas
	I1218 12:58:00.598669   12728 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.238.182 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1218 12:58:00.599665   12728 out.go:177] * Verifying Kubernetes components...
	I1218 12:58:00.623460   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 12:58:01.344512   12728 command_runner.go:130] > configmap/coredns replaced
	I1218 12:58:01.344662   12728 start.go:929] {"host.minikube.internal": 192.168.224.1} host record injected into CoreDNS's ConfigMap
	I1218 12:58:01.345710   12728 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 12:58:01.346481   12728 kapi.go:59] client config for multinode-015900: &rest.Config{Host:"https://192.168.238.182:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-015900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-015900\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21a1f20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 12:58:01.347375   12728 node_ready.go:35] waiting up to 6m0s for node "multinode-015900" to be "Ready" ...
	I1218 12:58:01.347559   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:01.347559   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:01.347559   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:01.347559   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:01.364939   12728 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1218 12:58:01.364939   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:01.364939   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:01.364939   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:01.365444   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:01 GMT
	I1218 12:58:01.365444   12728 round_trippers.go:580]     Audit-Id: df2d7e99-38e1-4ce3-bb58-c5e44c8173a0
	I1218 12:58:01.365444   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:01.365444   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:01.365681   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:01.852611   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:01.852710   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:01.852710   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:01.852710   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:01.856781   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:01.856847   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:01.856847   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:01.856847   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:01.856906   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:01 GMT
	I1218 12:58:01.856906   12728 round_trippers.go:580]     Audit-Id: 3d88e0cf-53a6-43d7-ba5b-a27d24828ec0
	I1218 12:58:01.856906   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:01.856906   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:01.857098   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:02.357190   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:02.357190   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:02.357190   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:02.357190   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:02.361971   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:02.362357   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:02.362357   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:02.362357   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:02.362357   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:02.362357   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:58:02.362605   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:58:02.362357   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:02.363383   12728 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 12:58:02.362659   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:02 GMT
	I1218 12:58:02.364354   12728 round_trippers.go:580]     Audit-Id: 375260c5-7e8e-4649-a6a2-89f01905fde1
	I1218 12:58:02.364470   12728 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 12:58:02.362357   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:58:02.364470   12728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 12:58:02.364595   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:58:02.364703   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:58:02.364891   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:02.365921   12728 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 12:58:02.366772   12728 kapi.go:59] client config for multinode-015900: &rest.Config{Host:"https://192.168.238.182:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-015900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-015900\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), C
AData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21a1f20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 12:58:02.367991   12728 addons.go:231] Setting addon default-storageclass=true in "multinode-015900"
	I1218 12:58:02.368159   12728 host.go:66] Checking if "multinode-015900" exists ...
	I1218 12:58:02.369132   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
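The two Get-VM invocations logged above are how the Hyper-V driver checks machine state: it shells out to PowerShell and parses stdout (here "Running"). A minimal Go sketch of that probe, built only from the command line visible in the log and not from minikube's actual driver code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// vmState runs the same PowerShell probe libmachine logs above and returns
// the trimmed stdout (e.g. "Running"). Windows-only, by construction.
func vmState(name string) (string, error) {
	cmd := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive",
		fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name),
	)
	out, err := cmd.Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := vmState("multinode-015900") // VM name taken from the log
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("VM state:", state)
}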
	I1218 12:58:02.853798   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:02.853930   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:02.853930   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:02.853930   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:02.858485   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:02.858955   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:02.858955   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:02.858955   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:02.858955   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:02.859025   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:02.859025   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:02 GMT
	I1218 12:58:02.859025   12728 round_trippers.go:580]     Audit-Id: 1c6a59a2-abc1-46e5-a1a3-c51c5d03f7f7
	I1218 12:58:02.859025   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:03.360906   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:03.361037   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:03.361170   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:03.361216   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:03.365587   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:03.365816   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:03.365816   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:03.365816   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:03.365816   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:03 GMT
	I1218 12:58:03.365942   12728 round_trippers.go:580]     Audit-Id: e8385213-55a1-4f3e-bd7b-afcbc881efdc
	I1218 12:58:03.365942   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:03.365942   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:03.366278   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:03.366907   12728 node_ready.go:58] node "multinode-015900" has status "Ready":"False"
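This is the first of several "Ready":"False" checkpoints in this run: the harness polls GET /api/v1/nodes/multinode-015900 roughly every 500ms and inspects the NodeReady condition of the returned object. A minimal client-go sketch of that loop, illustrative rather than the test harness's own code (the kubeconfig path and node name are copied from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("",
		`C:\Users\jenkins.minikube7\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(),
			"multinode-015900", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
}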
	I1218 12:58:03.853148   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:03.853148   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:03.853148   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:03.853148   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:03.856734   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:03.856734   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:03.857671   12728 round_trippers.go:580]     Audit-Id: 6ba5bafe-adeb-4476-8fbf-22d40862ceb4
	I1218 12:58:03.857671   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:03.857671   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:03.857671   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:03.857720   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:03.857720   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:03 GMT
	I1218 12:58:03.857958   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:04.359760   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:04.359760   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:04.359853   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:04.359853   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:04.364115   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:04.364115   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:04.364115   12728 round_trippers.go:580]     Audit-Id: fea39283-5a9b-4654-8adc-260cabf2e69e
	I1218 12:58:04.364115   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:04.364115   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:04.364115   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:04.364115   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:04.364115   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:04 GMT
	I1218 12:58:04.364527   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:04.563630   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:58:04.563630   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:58:04.563630   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:58:04.610443   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:58:04.610443   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:58:04.610443   12728 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 12:58:04.610707   12728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 12:58:04.610707   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
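"scp memory" in the ssh_runner lines means the manifest is streamed from an in-memory buffer to the remote path rather than copied from a local file. One dependency-free way to do that over an already-open SSH connection is to pipe the bytes to sudo tee; this mechanism is an assumption, since the log records only the source, destination, and byte count:

package sshfiles

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// writeRemoteFile streams an in-memory manifest to path on the VM by piping
// it to sudo tee over an established SSH client. The tee approach is an
// assumption; the log only says "scp memory --> <path> (<n> bytes)".
func writeRemoteFile(client *ssh.Client, path string, data []byte) error {
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(data)
	return session.Run(fmt.Sprintf("sudo tee %s > /dev/null", path))
}

minikube's ssh_runner may well use SFTP or a real scp subsystem internally; tee is simply the smallest equivalent that matches what the log shows.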
	I1218 12:58:04.848798   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:04.848798   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:04.848798   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:04.848798   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:04.851909   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:04.852949   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:04.852949   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:04.852949   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:04 GMT
	I1218 12:58:04.852949   12728 round_trippers.go:580]     Audit-Id: faa94ecb-494e-41a4-b957-b342f80a61dc
	I1218 12:58:04.853060   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:04.853060   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:04.853060   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:04.853060   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:05.359743   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:05.359806   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:05.359863   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:05.359863   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:05.365302   12728 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1218 12:58:05.365302   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:05.365302   12728 round_trippers.go:580]     Audit-Id: 2029fcf7-511e-4d47-a65e-98fa0ddd8774
	I1218 12:58:05.365302   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:05.365302   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:05.365302   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:05.365302   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:05.365302   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:05 GMT
	I1218 12:58:05.366314   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:05.851605   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:05.851605   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:05.851728   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:05.851728   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:05.856311   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:05.856396   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:05.856396   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:05.856396   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:05 GMT
	I1218 12:58:05.856396   12728 round_trippers.go:580]     Audit-Id: 7a542b69-9a54-4073-b514-dad9525137cd
	I1218 12:58:05.856484   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:05.856484   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:05.856484   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:05.856822   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:05.857445   12728 node_ready.go:58] node "multinode-015900" has status "Ready":"False"
	I1218 12:58:06.361915   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:06.361982   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:06.361982   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:06.361982   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:06.365409   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:06.365444   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:06.365444   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:06.365444   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:06.365444   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:06.365444   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:06 GMT
	I1218 12:58:06.365504   12728 round_trippers.go:580]     Audit-Id: 5ace8068-b201-4728-be3c-509a8cffa51c
	I1218 12:58:06.365504   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:06.365659   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:06.854681   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:58:06.854866   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:58:06.854866   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:58:06.855141   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:06.855141   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:06.855141   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:06.855141   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:06.861186   12728 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1218 12:58:06.861186   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:06.861186   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:06.861186   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:06.861186   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:06.861186   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:06.861186   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:06 GMT
	I1218 12:58:06.861186   12728 round_trippers.go:580]     Audit-Id: 18e81e11-c6d0-478d-9db9-c34b483ae0dd
	I1218 12:58:06.861186   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:07.331809   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:58:07.332046   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:58:07.332814   12728 sshutil.go:53] new ssh client: &{IP:192.168.238.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa Username:docker}
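The ssh client logged here authenticates as user "docker" against the VM IP on port 22 with the profile's RSA key. A minimal golang.org/x/crypto/ssh sketch of that dial, with host-key verification skipped for brevity (acceptable only for a throwaway test VM; the IP and key path are taken from the log line above):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only; pin host keys in real code
	}
	client, err := ssh.Dial("tcp", "192.168.238.182:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected")
}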
	I1218 12:58:07.348169   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:07.348270   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:07.348270   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:07.348270   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:07.351755   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:07.352118   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:07.352118   12728 round_trippers.go:580]     Audit-Id: 39e91a6a-6a3a-45bc-aa8e-0f01aed9cfe5
	I1218 12:58:07.352118   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:07.352118   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:07.352118   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:07.352118   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:07.352118   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:07 GMT
	I1218 12:58:07.352389   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:07.530914   12728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
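With the connection up, the apply step is an ordinary remote command; the command line below is copied from the log entry above, wrapped in a hypothetical helper whose client argument is the connection dialed in the previous sketch:

package sshfiles

import "golang.org/x/crypto/ssh"

// applyManifest runs the logged kubectl command on the VM and returns the
// combined output; client is the SSH connection from the previous sketch.
func applyManifest(client *ssh.Client, manifest string) ([]byte, error) {
	session, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer session.Close()
	return session.CombinedOutput(
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
			"/var/lib/minikube/binaries/v1.28.4/kubectl apply -f " + manifest)
}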
	I1218 12:58:07.853918   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:07.853971   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:07.853971   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:07.853971   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:07.857318   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:07.857318   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:07.857318   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:07.857318   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:07.857318   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:07.857318   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:07 GMT
	I1218 12:58:07.857318   12728 round_trippers.go:580]     Audit-Id: b123a29f-976a-475e-885e-85c45e9cb965
	I1218 12:58:07.857800   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:07.858033   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:07.858529   12728 node_ready.go:58] node "multinode-015900" has status "Ready":"False"
	I1218 12:58:08.252871   12728 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1218 12:58:08.252906   12728 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1218 12:58:08.253012   12728 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1218 12:58:08.253012   12728 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1218 12:58:08.253012   12728 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1218 12:58:08.253012   12728 command_runner.go:130] > pod/storage-provisioner created
	I1218 12:58:08.363245   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:08.363245   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:08.363245   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:08.363245   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:08.366750   12728 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 12:58:08.366750   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:08.366750   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:08.366881   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:08 GMT
	I1218 12:58:08.366881   12728 round_trippers.go:580]     Audit-Id: 79f20a18-bc4b-43e0-a934-e4d78a25b55f
	I1218 12:58:08.366881   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:08.366881   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:08.366881   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:08.366881   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:08.856243   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:08.856243   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:08.856243   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:08.856350   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:08.859747   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:08.859747   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:08.859747   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:08.859747   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:08.859747   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:08 GMT
	I1218 12:58:08.859747   12728 round_trippers.go:580]     Audit-Id: b01fc719-597e-4bb6-b8dc-267c3fc7d73a
	I1218 12:58:08.859747   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:08.859747   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:08.860742   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:09.363611   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:09.363611   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:09.363611   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:09.363611   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:09.368147   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:09.368541   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:09.368541   12728 round_trippers.go:580]     Audit-Id: 0e3cbd8a-d73b-458d-af04-200c9dda1017
	I1218 12:58:09.368541   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:09.368541   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:09.368541   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:09.368541   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:09.368541   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:09 GMT
	I1218 12:58:09.368939   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:09.457708   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:58:09.457708   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:58:09.457708   12728 sshutil.go:53] new ssh client: &{IP:192.168.238.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa Username:docker}
	I1218 12:58:09.588888   12728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 12:58:09.853297   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:09.853352   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:09.853352   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:09.853468   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:09.856701   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:09.856701   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:09.856701   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:09.856701   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:09.856701   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:09.857115   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:09.857115   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:09 GMT
	I1218 12:58:09.857115   12728 round_trippers.go:580]     Audit-Id: fe01c91d-a259-440d-8e64-c4fd6c20b29a
	I1218 12:58:09.857717   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:09.908690   12728 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1218 12:58:09.909004   12728 round_trippers.go:463] GET https://192.168.238.182:8443/apis/storage.k8s.io/v1/storageclasses
	I1218 12:58:09.909066   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:09.909066   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:09.909066   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:09.912427   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:09.912427   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:09.912427   12728 round_trippers.go:580]     Audit-Id: 4ba45e5a-b95f-4367-b255-89e90284379f
	I1218 12:58:09.912427   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:09.912427   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:09.912427   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:09.912427   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:09.912668   12728 round_trippers.go:580]     Content-Length: 1273
	I1218 12:58:09.912668   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:09 GMT
	I1218 12:58:09.912668   12728 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"standard","uid":"f52fdf98-baf2-4434-b46e-cde6993b02a6","resourceVersion":"420","creationTimestamp":"2023-12-18T12:58:09Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-18T12:58:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1218 12:58:09.913363   12728 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"f52fdf98-baf2-4434-b46e-cde6993b02a6","resourceVersion":"420","creationTimestamp":"2023-12-18T12:58:09Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-18T12:58:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1218 12:58:09.913446   12728 round_trippers.go:463] PUT https://192.168.238.182:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1218 12:58:09.913446   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:09.913500   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:09.913500   12728 round_trippers.go:473]     Content-Type: application/json
	I1218 12:58:09.913500   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:09.916856   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:09.916856   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:09.916856   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:09.916856   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:09.916856   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:09.916856   12728 round_trippers.go:580]     Content-Length: 1220
	I1218 12:58:09.916856   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:09 GMT
	I1218 12:58:09.916856   12728 round_trippers.go:580]     Audit-Id: c26b4cd3-e2ef-4f72-b41b-b47b472c2471
	I1218 12:58:09.916856   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:09.916856   12728 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"f52fdf98-baf2-4434-b46e-cde6993b02a6","resourceVersion":"420","creationTimestamp":"2023-12-18T12:58:09Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-18T12:58:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1218 12:58:09.916856   12728 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1218 12:58:09.919803   12728 addons.go:502] enable addons completed in 9.8596088s: enabled=[storage-provisioner default-storageclass]
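The GET-then-PUT on /apis/storage.k8s.io/v1/storageclasses/standard just above is the standard way to make a StorageClass the cluster default: set the storageclass.kubernetes.io/is-default-class annotation and write the object back. A client-go sketch of that update, as a hypothetical helper assuming a clientset built as in the earlier readiness sketch:

package addons

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// markDefault reproduces the GET-then-PUT seen in the log: annotate the
// "standard" StorageClass as the cluster default and update it in place.
func markDefault(client kubernetes.Interface) error {
	ctx := context.TODO()
	sc, err := client.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	_, err = client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
	return err
}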
	I1218 12:58:10.359995   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:10.359995   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:10.359995   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:10.359995   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:10.363560   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:10.363560   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:10.363560   12728 round_trippers.go:580]     Audit-Id: 94dd893a-ec7b-4d15-bc11-fadd56160fe1
	I1218 12:58:10.363560   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:10.363936   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:10.363936   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:10.363936   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:10.363936   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:10 GMT
	I1218 12:58:10.364501   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:10.364851   12728 node_ready.go:58] node "multinode-015900" has status "Ready":"False"
	I1218 12:58:10.852110   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:10.852110   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:10.852194   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:10.852194   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:10.963534   12728 round_trippers.go:574] Response Status: 200 OK in 111 milliseconds
	I1218 12:58:10.964330   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:10.964330   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:10.964330   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:10.964330   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:10.964330   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:10 GMT
	I1218 12:58:10.964330   12728 round_trippers.go:580]     Audit-Id: b6b8a19e-ce5e-470f-b0d2-1649c7d083c4
	I1218 12:58:10.964330   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:10.964651   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:11.356286   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:11.356286   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:11.356286   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:11.356286   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:11.359651   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:11.359651   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:11.360106   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:11.360106   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:11 GMT
	I1218 12:58:11.360106   12728 round_trippers.go:580]     Audit-Id: 6581356e-89a9-4a5a-b46d-efa261ae2749
	I1218 12:58:11.360106   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:11.360106   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:11.360106   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:11.360444   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:11.854845   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:11.854845   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:11.854947   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:11.854947   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:11.859853   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:11.859947   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:11.859947   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:11.859947   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:11.859947   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:11 GMT
	I1218 12:58:11.859947   12728 round_trippers.go:580]     Audit-Id: fcbae442-292b-409c-9c5c-803867701eff
	I1218 12:58:11.859947   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:11.859947   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:11.860221   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:12.356627   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:12.356627   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:12.356627   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:12.356627   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:12.360137   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:12.360710   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:12.360710   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:12 GMT
	I1218 12:58:12.360710   12728 round_trippers.go:580]     Audit-Id: 13660282-adf7-4854-a0d8-6c139a2a4b07
	I1218 12:58:12.360710   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:12.360710   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:12.360710   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:12.360710   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:12.361118   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:12.856616   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:12.856728   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:12.856728   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:12.856728   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:12.861027   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:12.861497   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:12.861497   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:12.861497   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:12.861497   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:12.861497   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:12 GMT
	I1218 12:58:12.861497   12728 round_trippers.go:580]     Audit-Id: 1a62351b-a6f4-48a7-ada1-98cb52b203e3
	I1218 12:58:12.861497   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:12.861805   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:12.862500   12728 node_ready.go:58] node "multinode-015900" has status "Ready":"False"
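
The node_ready.go lines summarize what the repeated GETs above are for: minikube keeps re-fetching the Node object and reports Ready only once the Node's NodeReady condition turns True. Below is a minimal client-go sketch of that condition check, assuming a kubeconfig at the default path; the helper name nodeIsReady is illustrative, not minikube's actual node_ready.go code.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the node's NodeReady condition is True,
    // which is what the "Ready":"False" / "Ready":"True" lines reflect.
    func nodeIsReady(ctx context.Context, client kubernetes.Interface, name string) (bool, error) {
        node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        ready, err := nodeIsReady(context.Background(),
            kubernetes.NewForConfigOrDie(cfg), "multinode-015900")
        fmt.Println(ready, err)
    }
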
	I1218 12:58:13.355856   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:13.356051   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:13.356051   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:13.356051   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:13.362617   12728 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1218 12:58:13.362617   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:13.362617   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:13.362617   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:13 GMT
	I1218 12:58:13.362617   12728 round_trippers.go:580]     Audit-Id: fd564bd2-8737-4ba8-9faf-043e799b5c21
	I1218 12:58:13.362617   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:13.362617   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:13.362617   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:13.362617   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:13.854790   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:13.854901   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:13.854901   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:13.854901   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:13.858486   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:13.858591   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:13.858591   12728 round_trippers.go:580]     Audit-Id: 452dd591-77bf-481d-9653-e36ae69bc47e
	I1218 12:58:13.858591   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:13.858591   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:13.858591   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:13.858591   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:13.858591   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:13 GMT
	I1218 12:58:13.858960   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:14.354731   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:14.354731   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:14.354840   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:14.354840   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:14.358230   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:14.359055   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:14.359055   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:14.359055   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:14.359055   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:14.359055   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:14.359163   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:14 GMT
	I1218 12:58:14.359163   12728 round_trippers.go:580]     Audit-Id: 26200d5f-96d6-436c-86f5-112c950fa1a1
	I1218 12:58:14.359440   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:14.855618   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:14.855733   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:14.855733   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:14.855733   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:14.860019   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:14.860141   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:14.860141   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:14.860141   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:14.860244   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:14 GMT
	I1218 12:58:14.860244   12728 round_trippers.go:580]     Audit-Id: f75edcba-2147-4f1d-9259-ff4e8a6e62ca
	I1218 12:58:14.860244   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:14.860244   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:14.860459   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:15.354896   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:15.355002   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:15.355002   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:15.355002   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:15.358394   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:15.359266   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:15.359266   12728 round_trippers.go:580]     Audit-Id: b565da38-d10c-4d80-8ba5-c88e585ee20f
	I1218 12:58:15.359266   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:15.359266   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:15.359266   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:15.359266   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:15.359266   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:15 GMT
	I1218 12:58:15.359723   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:15.360195   12728 node_ready.go:58] node "multinode-015900" has status "Ready":"False"
	I1218 12:58:15.854535   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:15.854535   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:15.854535   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:15.854677   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:15.859108   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:15.859409   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:15.859409   12728 round_trippers.go:580]     Audit-Id: 5c9435ef-78fb-49fb-a226-ede4aabb1d10
	I1218 12:58:15.859409   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:15.859409   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:15.859409   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:15.859409   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:15.859409   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:15 GMT
	I1218 12:58:15.859693   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:16.352240   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:16.352355   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:16.352355   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:16.352355   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:16.355944   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:16.356652   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:16.356652   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:16.356652   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:16.356652   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:16 GMT
	I1218 12:58:16.356652   12728 round_trippers.go:580]     Audit-Id: f932eb0f-c14e-4645-894d-de3a7ceec96b
	I1218 12:58:16.356652   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:16.356759   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:16.356995   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:16.853523   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:16.853670   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:16.853670   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:16.853670   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:16.858028   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:16.858504   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:16.858504   12728 round_trippers.go:580]     Audit-Id: ecfedde6-fd3a-49eb-916b-9e8adb556c4e
	I1218 12:58:16.858504   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:16.858504   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:16.858504   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:16.858504   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:16.858504   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:16 GMT
	I1218 12:58:16.859112   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"430","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I1218 12:58:16.859590   12728 node_ready.go:49] node "multinode-015900" has status "Ready":"True"
	I1218 12:58:16.859723   12728 node_ready.go:38] duration metric: took 15.5122919s waiting for node "multinode-015900" to be "Ready" ...
	I1218 12:58:16.859723   12728 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
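
Once the node is Ready, pod_ready.go moves on to the system-critical pods: it lists kube-system pods matching the six label selectors named in the line above and re-checks them, on roughly the half-second cadence visible in the timestamps, until each reports the PodReady condition True or the 6m0s budget runs out. A sketch of that wait, under the same kubeconfig assumption as above and using the classic wait.PollImmediate helper rather than minikube's own loop:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // The selectors listed in the log line above.
    var criticalSelectors = []string{
        "k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
        "component=kube-controller-manager", "k8s-app=kube-proxy",
        "component=kube-scheduler",
    }

    // podIsReady reports whether the pod's PodReady condition is True.
    func podIsReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // waitForCriticalPods polls on a fixed interval until every pod
    // matching the selectors is Ready, or the budget expires.
    func waitForCriticalPods(client kubernetes.Interface) error {
        return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            for _, sel := range criticalSelectors {
                pods, err := client.CoreV1().Pods("kube-system").List(
                    context.Background(), metav1.ListOptions{LabelSelector: sel})
                if err != nil {
                    return false, err
                }
                for i := range pods.Items {
                    if !podIsReady(&pods.Items[i]) {
                        return false, nil // not ready yet; poll again
                    }
                }
            }
            return true, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitForCriticalPods(kubernetes.NewForConfigOrDie(cfg)))
    }
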
	I1218 12:58:16.860040   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods
	I1218 12:58:16.860040   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:16.860040   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:16.860040   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:16.870847   12728 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1218 12:58:16.870847   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:16.870847   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:16 GMT
	I1218 12:58:16.870847   12728 round_trippers.go:580]     Audit-Id: 848f76d7-41be-46a7-a1f5-03c26f208cbb
	I1218 12:58:16.870847   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:16.870847   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:16.870847   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:16.870847   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:16.872199   12728 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"436"},"items":[{"metadata":{"name":"coredns-5dd5756b68-256fn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bd59fc6-50d6-4764-823d-71232811cff2","resourceVersion":"436","creationTimestamp":"2023-12-18T12:58:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d117bf5f-d14c-4a53-ad5a-250a0b115b2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d117bf5f-d14c-4a53-ad5a-250a0b115b2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54052 chars]
	I1218 12:58:16.877632   12728 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-256fn" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:16.877785   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-256fn
	I1218 12:58:16.877785   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:16.877785   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:16.877785   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:16.880177   12728 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 12:58:16.880177   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:16.880177   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:16 GMT
	I1218 12:58:16.880177   12728 round_trippers.go:580]     Audit-Id: 1ab0faaa-6a69-47a5-9250-9493038adf9e
	I1218 12:58:16.880177   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:16.880177   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:16.880177   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:16.880580   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:16.880938   12728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-256fn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bd59fc6-50d6-4764-823d-71232811cff2","resourceVersion":"436","creationTimestamp":"2023-12-18T12:58:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d117bf5f-d14c-4a53-ad5a-250a0b115b2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d117bf5f-d14c-4a53-ad5a-250a0b115b2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6153 chars]
	I1218 12:58:16.881513   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:16.881513   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:16.881513   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:16.881513   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:16.883793   12728 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 12:58:16.883793   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:16.883793   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:16.883793   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:16 GMT
	I1218 12:58:16.883793   12728 round_trippers.go:580]     Audit-Id: 22dd2b07-25c0-442b-b33b-c8fc74a48f5f
	I1218 12:58:16.883793   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:16.883793   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:16.884724   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:16.885097   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"430","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I1218 12:58:17.393558   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-256fn
	I1218 12:58:17.393558   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:17.393633   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:17.393633   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:17.397007   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:17.397007   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:17.397418   12728 round_trippers.go:580]     Audit-Id: 857c92b7-c0e1-4f71-aa73-440460713cd2
	I1218 12:58:17.397418   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:17.397418   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:17.397418   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:17.397418   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:17.397418   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:17 GMT
	I1218 12:58:17.397496   12728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-256fn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bd59fc6-50d6-4764-823d-71232811cff2","resourceVersion":"436","creationTimestamp":"2023-12-18T12:58:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d117bf5f-d14c-4a53-ad5a-250a0b115b2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d117bf5f-d14c-4a53-ad5a-250a0b115b2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6153 chars]
	I1218 12:58:17.398252   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:17.398355   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:17.398355   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:17.398355   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:17.400583   12728 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 12:58:17.400583   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:17.400583   12728 round_trippers.go:580]     Audit-Id: d1e95f3c-3bbf-48d9-8c71-a54fc04ec690
	I1218 12:58:17.400583   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:17.400583   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:17.400583   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:17.400583   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:17.400583   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:17 GMT
	I1218 12:58:17.401751   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"430","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I1218 12:58:17.884993   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-256fn
	I1218 12:58:17.884993   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:17.885071   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:17.885071   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:17.888476   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:17.888476   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:17.888476   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:17 GMT
	I1218 12:58:17.888476   12728 round_trippers.go:580]     Audit-Id: 5b0ee975-10f8-4b69-b693-41ef1d9b3e08
	I1218 12:58:17.888988   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:17.888988   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:17.888988   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:17.888988   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:17.889330   12728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-256fn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bd59fc6-50d6-4764-823d-71232811cff2","resourceVersion":"436","creationTimestamp":"2023-12-18T12:58:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d117bf5f-d14c-4a53-ad5a-250a0b115b2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d117bf5f-d14c-4a53-ad5a-250a0b115b2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6153 chars]
	I1218 12:58:17.889830   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:17.889830   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:17.889830   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:17.889830   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:17.892450   12728 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 12:58:17.892450   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:17.892450   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:17.892450   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:17.892450   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:17.892450   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:17 GMT
	I1218 12:58:17.892450   12728 round_trippers.go:580]     Audit-Id: 76ceb65c-2d41-4f71-b27a-b649a33d9820
	I1218 12:58:17.892450   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:17.893850   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"430","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I1218 12:58:18.391653   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-256fn
	I1218 12:58:18.391653   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:18.391653   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:18.391653   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:18.396058   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:18.396058   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:18.396058   12728 round_trippers.go:580]     Audit-Id: 87e56ad8-12db-435a-ac4b-0082a4e1fa4c
	I1218 12:58:18.396058   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:18.396058   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:18.396058   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:18.396058   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:18.396058   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:18 GMT
	I1218 12:58:18.396058   12728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-256fn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bd59fc6-50d6-4764-823d-71232811cff2","resourceVersion":"436","creationTimestamp":"2023-12-18T12:58:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d117bf5f-d14c-4a53-ad5a-250a0b115b2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d117bf5f-d14c-4a53-ad5a-250a0b115b2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6153 chars]
	I1218 12:58:18.397639   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:18.397724   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:18.397724   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:18.397724   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:18.402985   12728 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1218 12:58:18.402985   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:18.402985   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:18.403946   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:18.403946   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:18 GMT
	I1218 12:58:18.403946   12728 round_trippers.go:580]     Audit-Id: 31c39a66-94d9-47ce-b5b5-1f358dd75c08
	I1218 12:58:18.404067   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:18.404160   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:18.404602   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"430","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I1218 12:58:18.889100   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-256fn
	I1218 12:58:18.889100   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:18.889100   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:18.889100   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:18.896123   12728 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1218 12:58:18.896123   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:18.896123   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:18.896123   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:18.896123   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:18.896123   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:18 GMT
	I1218 12:58:18.896123   12728 round_trippers.go:580]     Audit-Id: b0d89b98-922d-49c2-8063-85bc543fbed5
	I1218 12:58:18.896123   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:18.896123   12728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-256fn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bd59fc6-50d6-4764-823d-71232811cff2","resourceVersion":"436","creationTimestamp":"2023-12-18T12:58:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d117bf5f-d14c-4a53-ad5a-250a0b115b2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d117bf5f-d14c-4a53-ad5a-250a0b115b2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6153 chars]
	I1218 12:58:18.897421   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:18.897421   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:18.897421   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:18.897421   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:18.900934   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:18.901490   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:18.901538   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:18.901538   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:18.901538   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:18.901538   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:18.901538   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:18 GMT
	I1218 12:58:18.901538   12728 round_trippers.go:580]     Audit-Id: c9822513-4d8e-45b9-a154-8e248b98d54e
	I1218 12:58:18.901693   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"430","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I1218 12:58:18.901693   12728 pod_ready.go:102] pod "coredns-5dd5756b68-256fn" in "kube-system" namespace has status "Ready":"False"
	I1218 12:58:19.390965   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-256fn
	I1218 12:58:19.390965   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.390965   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.390965   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.394575   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:19.394575   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.394575   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.395009   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.395009   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.395009   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.395009   12728 round_trippers.go:580]     Audit-Id: 97ed483c-36ad-473b-87ed-2c15a3b43cd8
	I1218 12:58:19.395009   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.395275   12728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-256fn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bd59fc6-50d6-4764-823d-71232811cff2","resourceVersion":"449","creationTimestamp":"2023-12-18T12:58:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d117bf5f-d14c-4a53-ad5a-250a0b115b2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d117bf5f-d14c-4a53-ad5a-250a0b115b2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6285 chars]
	I1218 12:58:19.395474   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:19.396026   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.396026   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.396026   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.399073   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:19.399073   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.399073   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.399414   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.399414   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.399414   12728 round_trippers.go:580]     Audit-Id: f624d71f-b02b-4d58-88d7-4fc60a2f18b8
	I1218 12:58:19.399414   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.399414   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.399626   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"454","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I1218 12:58:19.400238   12728 pod_ready.go:92] pod "coredns-5dd5756b68-256fn" in "kube-system" namespace has status "Ready":"True"
	I1218 12:58:19.400311   12728 pod_ready.go:81] duration metric: took 2.5225973s waiting for pod "coredns-5dd5756b68-256fn" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:19.400311   12728 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-015900" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:19.400502   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-015900
	I1218 12:58:19.400502   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.400549   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.400549   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.403294   12728 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 12:58:19.403294   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.403294   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.403294   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.403836   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.403836   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.403910   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.403937   12728 round_trippers.go:580]     Audit-Id: c4e6dce1-1b77-4217-b864-2effa97daa69
	I1218 12:58:19.404139   12728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-015900","namespace":"kube-system","uid":"a7426d67-4ce5-4c3b-be1e-f08631877ad4","resourceVersion":"344","creationTimestamp":"2023-12-18T12:57:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.238.182:2379","kubernetes.io/config.hash":"0ccdce53007951dcbebf9ee828c6d414","kubernetes.io/config.mirror":"0ccdce53007951dcbebf9ee828c6d414","kubernetes.io/config.seen":"2023-12-18T12:57:48.242049787Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5882 chars]
	I1218 12:58:19.404734   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:19.404734   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.404734   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.404734   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.407389   12728 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 12:58:19.407389   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.407389   12728 round_trippers.go:580]     Audit-Id: a04da8da-50e4-4d03-91ac-219c19c86890
	I1218 12:58:19.407389   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.407389   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.407389   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.407389   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.407389   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.408373   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"454","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I1218 12:58:19.408373   12728 pod_ready.go:92] pod "etcd-multinode-015900" in "kube-system" namespace has status "Ready":"True"
	I1218 12:58:19.408373   12728 pod_ready.go:81] duration metric: took 8.0626ms waiting for pod "etcd-multinode-015900" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:19.408373   12728 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-015900" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:19.408373   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-015900
	I1218 12:58:19.408373   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.408373   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.408373   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.411377   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:19.411377   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.411377   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.411377   12728 round_trippers.go:580]     Audit-Id: e17c3b9e-e568-45ea-afe3-6e845fe7feda
	I1218 12:58:19.411377   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.412138   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.412138   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.412138   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.412477   12728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-015900","namespace":"kube-system","uid":"fed449c3-ce1c-43a7-bedd-023eca58f1d0","resourceVersion":"417","creationTimestamp":"2023-12-18T12:57:48Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.238.182:8443","kubernetes.io/config.hash":"8f250ad179b46602df2936b85d0cd45e","kubernetes.io/config.mirror":"8f250ad179b46602df2936b85d0cd45e","kubernetes.io/config.seen":"2023-12-18T12:57:48.242055287Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7417 chars]
	I1218 12:58:19.412997   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:19.412997   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.412997   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.412997   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.415566   12728 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 12:58:19.415566   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.415665   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.415665   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.415665   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.415665   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.415665   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.415665   12728 round_trippers.go:580]     Audit-Id: fe6c7132-ee96-4d44-966a-0999d536721a
	I1218 12:58:19.415729   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"454","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I1218 12:58:19.415729   12728 pod_ready.go:92] pod "kube-apiserver-multinode-015900" in "kube-system" namespace has status "Ready":"True"
	I1218 12:58:19.416269   12728 pod_ready.go:81] duration metric: took 7.8953ms waiting for pod "kube-apiserver-multinode-015900" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:19.416269   12728 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-015900" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:19.416269   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-015900
	I1218 12:58:19.416269   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.416269   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.416482   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.418300   12728 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 12:58:19.418300   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.418300   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.418300   12728 round_trippers.go:580]     Audit-Id: 93414872-e49f-4727-a1f8-e90e317701ca
	I1218 12:58:19.419106   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.419106   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.419106   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.419106   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.419402   12728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-015900","namespace":"kube-system","uid":"adc96380-4e5f-4486-a471-23e8dad2a63b","resourceVersion":"418","creationTimestamp":"2023-12-18T12:57:48Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a757899e3fc62b58934ce911dce4fad5","kubernetes.io/config.mirror":"a757899e3fc62b58934ce911dce4fad5","kubernetes.io/config.seen":"2023-12-18T12:57:48.242056787Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6977 chars]
	I1218 12:58:19.419700   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:19.419700   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.419700   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.419700   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.424165   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:19.424165   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.424165   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.424165   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.424165   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.424165   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.424165   12728 round_trippers.go:580]     Audit-Id: ca993a19-fd86-49f0-b224-804e22eb857e
	I1218 12:58:19.424165   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.424818   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"454","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I1218 12:58:19.425510   12728 pod_ready.go:92] pod "kube-controller-manager-multinode-015900" in "kube-system" namespace has status "Ready":"True"
	I1218 12:58:19.425550   12728 pod_ready.go:81] duration metric: took 9.281ms waiting for pod "kube-controller-manager-multinode-015900" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:19.425663   12728 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xpxz2" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:19.425767   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xpxz2
	I1218 12:58:19.425767   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.425832   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.425899   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.430625   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:19.430679   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.430734   12728 round_trippers.go:580]     Audit-Id: 16e18aa0-5dc4-4009-9160-b4588d662b62
	I1218 12:58:19.430734   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.430734   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.430734   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.430734   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.430734   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.431017   12728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xpxz2","generateName":"kube-proxy-","namespace":"kube-system","uid":"6070d8c7-5af2-4e9f-b737-760782b764a6","resourceVersion":"403","creationTimestamp":"2023-12-18T12:58:00Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02d626d1-4faa-4d32-9f7c-aa1c56272dc4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02d626d1-4faa-4d32-9f7c-aa1c56272dc4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I1218 12:58:19.431194   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:19.431194   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.431194   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.431194   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.434442   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:19.434442   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.434442   12728 round_trippers.go:580]     Audit-Id: 54968a2a-1829-4342-b2cc-207ff60de35e
	I1218 12:58:19.434442   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.434442   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.434442   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.434442   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.434442   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.434442   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"454","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I1218 12:58:19.435230   12728 pod_ready.go:92] pod "kube-proxy-xpxz2" in "kube-system" namespace has status "Ready":"True"
	I1218 12:58:19.435230   12728 pod_ready.go:81] duration metric: took 9.5661ms waiting for pod "kube-proxy-xpxz2" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:19.435230   12728 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-015900" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:19.595257   12728 request.go:629] Waited for 160.0265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-015900
	I1218 12:58:19.595257   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-015900
	I1218 12:58:19.595649   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.595649   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.595649   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.601683   12728 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1218 12:58:19.601725   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.601725   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.601766   12728 round_trippers.go:580]     Audit-Id: 2f344028-78ec-41c2-a41b-f94c65cc94f9
	I1218 12:58:19.601766   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.601766   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.601766   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.601766   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.601862   12728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-015900","namespace":"kube-system","uid":"45ab2fd7-20c1-4148-8989-51a285e6b7d5","resourceVersion":"416","creationTimestamp":"2023-12-18T12:57:48Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"33558618a35e7b5da7a13bdc2f198c7e","kubernetes.io/config.mirror":"33558618a35e7b5da7a13bdc2f198c7e","kubernetes.io/config.seen":"2023-12-18T12:57:48.242057987Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4707 chars]
	I1218 12:58:19.798168   12728 request.go:629] Waited for 195.8772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:19.798251   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:19.798251   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.798251   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.798251   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.801889   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:19.801889   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.801889   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.802215   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.802215   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.802215   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.802215   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.802280   12728 round_trippers.go:580]     Audit-Id: 7a557ba5-0ff0-4818-80b2-185354b9e5f7
	I1218 12:58:19.802280   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"454","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I1218 12:58:19.803271   12728 pod_ready.go:92] pod "kube-scheduler-multinode-015900" in "kube-system" namespace has status "Ready":"True"
	I1218 12:58:19.803366   12728 pod_ready.go:81] duration metric: took 368.04ms waiting for pod "kube-scheduler-multinode-015900" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:19.803366   12728 pod_ready.go:38] duration metric: took 2.9436322s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 12:58:19.803366   12728 api_server.go:52] waiting for apiserver process to appear ...
	I1218 12:58:19.816108   12728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 12:58:19.836837   12728 command_runner.go:130] > 2085
	I1218 12:58:19.837051   12728 api_server.go:72] duration metric: took 19.2382663s to wait for apiserver process to appear ...
	I1218 12:58:19.837051   12728 api_server.go:88] waiting for apiserver healthz status ...
	I1218 12:58:19.837109   12728 api_server.go:253] Checking apiserver healthz at https://192.168.238.182:8443/healthz ...
	I1218 12:58:19.848793   12728 api_server.go:279] https://192.168.238.182:8443/healthz returned 200:
	ok
	I1218 12:58:19.849124   12728 round_trippers.go:463] GET https://192.168.238.182:8443/version
	I1218 12:58:19.849159   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.849159   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.849159   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.851338   12728 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 12:58:19.851338   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.851338   12728 round_trippers.go:580]     Audit-Id: 50a1cdd2-7c30-4beb-9ee8-ce73e14dc929
	I1218 12:58:19.851415   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.851415   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.851415   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.851415   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.851415   12728 round_trippers.go:580]     Content-Length: 264
	I1218 12:58:19.851415   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.851415   12728 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1218 12:58:19.851594   12728 api_server.go:141] control plane version: v1.28.4
	I1218 12:58:19.851690   12728 api_server.go:131] duration metric: took 14.6388ms to wait for apiserver health ...
	I1218 12:58:19.851690   12728 system_pods.go:43] waiting for kube-system pods to appear ...
	I1218 12:58:20.000483   12728 request.go:629] Waited for 148.7057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods
	I1218 12:58:20.001041   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods
	I1218 12:58:20.001108   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:20.001108   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:20.001108   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:20.007691   12728 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1218 12:58:20.007691   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:20.008661   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:20.008683   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:20.008683   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:20.008683   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:20 GMT
	I1218 12:58:20.008683   12728 round_trippers.go:580]     Audit-Id: 6acb52d2-ad73-4027-90ac-9893beb34f60
	I1218 12:58:20.008683   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:20.010821   12728 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"455"},"items":[{"metadata":{"name":"coredns-5dd5756b68-256fn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bd59fc6-50d6-4764-823d-71232811cff2","resourceVersion":"449","creationTimestamp":"2023-12-18T12:58:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d117bf5f-d14c-4a53-ad5a-250a0b115b2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d117bf5f-d14c-4a53-ad5a-250a0b115b2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54168 chars]
	I1218 12:58:20.012928   12728 system_pods.go:59] 8 kube-system pods found
	I1218 12:58:20.013483   12728 system_pods.go:61] "coredns-5dd5756b68-256fn" [6bd59fc6-50d6-4764-823d-71232811cff2] Running
	I1218 12:58:20.013483   12728 system_pods.go:61] "etcd-multinode-015900" [a7426d67-4ce5-4c3b-be1e-f08631877ad4] Running
	I1218 12:58:20.013483   12728 system_pods.go:61] "kindnet-bfllh" [f376dae1-8132-46e0-a367-7a4764b6138b] Running
	I1218 12:58:20.013540   12728 system_pods.go:61] "kube-apiserver-multinode-015900" [fed449c3-ce1c-43a7-bedd-023eca58f1d0] Running
	I1218 12:58:20.013540   12728 system_pods.go:61] "kube-controller-manager-multinode-015900" [adc96380-4e5f-4486-a471-23e8dad2a63b] Running
	I1218 12:58:20.013540   12728 system_pods.go:61] "kube-proxy-xpxz2" [6070d8c7-5af2-4e9f-b737-760782b764a6] Running
	I1218 12:58:20.013540   12728 system_pods.go:61] "kube-scheduler-multinode-015900" [45ab2fd7-20c1-4148-8989-51a285e6b7d5] Running
	I1218 12:58:20.013540   12728 system_pods.go:61] "storage-provisioner" [9b6ddc85-8b7a-45d0-9867-3be6bd8085e6] Running
	I1218 12:58:20.013540   12728 system_pods.go:74] duration metric: took 161.8499ms to wait for pod list to return data ...
	I1218 12:58:20.013629   12728 default_sa.go:34] waiting for default service account to be created ...
	I1218 12:58:20.200439   12728 request.go:629] Waited for 186.5423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.238.182:8443/api/v1/namespaces/default/serviceaccounts
	I1218 12:58:20.200679   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/default/serviceaccounts
	I1218 12:58:20.200679   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:20.200741   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:20.200741   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:20.205461   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:20.205461   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:20.205461   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:20.205461   12728 round_trippers.go:580]     Content-Length: 261
	I1218 12:58:20.205461   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:20 GMT
	I1218 12:58:20.205461   12728 round_trippers.go:580]     Audit-Id: 4f1257b4-3473-4b69-a357-66a177920eec
	I1218 12:58:20.205461   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:20.205461   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:20.205461   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:20.205461   12728 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"456"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"da3c6889-4b7c-416c-9129-f3bb36ad7663","resourceVersion":"351","creationTimestamp":"2023-12-18T12:57:59Z"}}]}
	I1218 12:58:20.206171   12728 default_sa.go:45] found service account: "default"
	I1218 12:58:20.206171   12728 default_sa.go:55] duration metric: took 192.5421ms for default service account to be created ...
	I1218 12:58:20.206171   12728 system_pods.go:116] waiting for k8s-apps to be running ...
	I1218 12:58:20.405893   12728 request.go:629] Waited for 199.7213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods
	I1218 12:58:20.406241   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods
	I1218 12:58:20.406241   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:20.406241   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:20.406325   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:20.411682   12728 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1218 12:58:20.411682   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:20.412047   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:20.412047   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:20.412047   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:20 GMT
	I1218 12:58:20.412047   12728 round_trippers.go:580]     Audit-Id: 3b02e2e4-9153-40b3-8f1e-2ad3ceef290a
	I1218 12:58:20.412047   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:20.412047   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:20.414263   12728 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"456"},"items":[{"metadata":{"name":"coredns-5dd5756b68-256fn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bd59fc6-50d6-4764-823d-71232811cff2","resourceVersion":"449","creationTimestamp":"2023-12-18T12:58:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d117bf5f-d14c-4a53-ad5a-250a0b115b2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d117bf5f-d14c-4a53-ad5a-250a0b115b2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54168 chars]
	I1218 12:58:20.417288   12728 system_pods.go:86] 8 kube-system pods found
	I1218 12:58:20.417288   12728 system_pods.go:89] "coredns-5dd5756b68-256fn" [6bd59fc6-50d6-4764-823d-71232811cff2] Running
	I1218 12:58:20.417288   12728 system_pods.go:89] "etcd-multinode-015900" [a7426d67-4ce5-4c3b-be1e-f08631877ad4] Running
	I1218 12:58:20.417288   12728 system_pods.go:89] "kindnet-bfllh" [f376dae1-8132-46e0-a367-7a4764b6138b] Running
	I1218 12:58:20.417288   12728 system_pods.go:89] "kube-apiserver-multinode-015900" [fed449c3-ce1c-43a7-bedd-023eca58f1d0] Running
	I1218 12:58:20.417288   12728 system_pods.go:89] "kube-controller-manager-multinode-015900" [adc96380-4e5f-4486-a471-23e8dad2a63b] Running
	I1218 12:58:20.417288   12728 system_pods.go:89] "kube-proxy-xpxz2" [6070d8c7-5af2-4e9f-b737-760782b764a6] Running
	I1218 12:58:20.417288   12728 system_pods.go:89] "kube-scheduler-multinode-015900" [45ab2fd7-20c1-4148-8989-51a285e6b7d5] Running
	I1218 12:58:20.417288   12728 system_pods.go:89] "storage-provisioner" [9b6ddc85-8b7a-45d0-9867-3be6bd8085e6] Running
	I1218 12:58:20.417288   12728 system_pods.go:126] duration metric: took 211.1159ms to wait for k8s-apps to be running ...
	I1218 12:58:20.417288   12728 system_svc.go:44] waiting for kubelet service to be running ....
	I1218 12:58:20.430983   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 12:58:20.451905   12728 system_svc.go:56] duration metric: took 34.6168ms WaitForService to wait for kubelet.
	I1218 12:58:20.452068   12728 kubeadm.go:581] duration metric: took 19.8533276s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1218 12:58:20.452380   12728 node_conditions.go:102] verifying NodePressure condition ...
	I1218 12:58:20.592289   12728 request.go:629] Waited for 139.6643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.238.182:8443/api/v1/nodes
	I1218 12:58:20.592613   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes
	I1218 12:58:20.592690   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:20.592690   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:20.592690   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:20.596093   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:20.596093   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:20.596093   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:20 GMT
	I1218 12:58:20.596093   12728 round_trippers.go:580]     Audit-Id: 2bac7cc7-78ea-4813-922a-94e527a12c46
	I1218 12:58:20.596093   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:20.596425   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:20.596425   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:20.596425   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:20.596715   12728 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"456"},"items":[{"metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"454","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5013 chars]
	I1218 12:58:20.597516   12728 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1218 12:58:20.597640   12728 node_conditions.go:123] node cpu capacity is 2
	I1218 12:58:20.597726   12728 node_conditions.go:105] duration metric: took 145.1916ms to run NodePressure ...
	I1218 12:58:20.597726   12728 start.go:228] waiting for startup goroutines ...
	I1218 12:58:20.597786   12728 start.go:233] waiting for cluster config update ...
	I1218 12:58:20.597786   12728 start.go:242] writing updated cluster config ...
	I1218 12:58:20.614848   12728 ssh_runner.go:195] Run: rm -f paused
	I1218 12:58:20.763418   12728 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1218 12:58:20.764421   12728 out.go:177] * Done! kubectl is now configured to use "multinode-015900" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-12-18 12:55:53 UTC, ends at Mon 2023-12-18 12:58:40 UTC. --
	Dec 18 12:58:01 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:01.154093281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 12:58:05 multinode-015900 cri-dockerd[1228]: time="2023-12-18T12:58:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/edbac0d907de9c24ba1961ed7a92cc8209455aada6074e050909d9df92d3f558/resolv.conf as [nameserver 192.168.224.1]"
	Dec 18 12:58:10 multinode-015900 cri-dockerd[1228]: time="2023-12-18T12:58:10Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20230809-80a64d96: Status: Downloaded newer image for kindest/kindnetd:v20230809-80a64d96"
	Dec 18 12:58:11 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:11.145274686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 18 12:58:11 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:11.145416587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 12:58:11 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:11.145444887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 18 12:58:11 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:11.145456087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.090105292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.092918302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.093147803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.093265003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.091467797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.093390704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.093585504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.093620904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 12:58:17 multinode-015900 cri-dockerd[1228]: time="2023-12-18T12:58:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3844ace09fc880146b66e177e06d623398d2d6294b6b4567d7408eb789d95a9c/resolv.conf as [nameserver 192.168.224.1]"
	Dec 18 12:58:17 multinode-015900 cri-dockerd[1228]: time="2023-12-18T12:58:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/83b7f24a350e12a0393bfee9379b82efd391afe1f3f144683858ea37d0304250/resolv.conf as [nameserver 192.168.224.1]"
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.799524528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.799802729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.799844029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.799905029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.892561660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.900835390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.901163791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.901569892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ce5fda4e192f9       ead0a4a53df89                                                                              23 seconds ago       Running             coredns                   0                   83b7f24a350e1       coredns-5dd5756b68-256fn
	6803f1209e6c8       6e38f40d628db                                                                              23 seconds ago       Running             storage-provisioner       0                   3844ace09fc88       storage-provisioner
	1bc4da30bccba       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052   30 seconds ago       Running             kindnet-cni               0                   edbac0d907de9       kindnet-bfllh
	cbb2ed8a2d451       83f6cc407eed8                                                                              39 seconds ago       Running             kube-proxy                0                   176319065ee74       kube-proxy-xpxz2
	b4091c55c3174       73deb9a3f7025                                                                              59 seconds ago       Running             etcd                      0                   eb1751282a0b3       etcd-multinode-015900
	3b0c7b029fe22       d058aa5ab969c                                                                              59 seconds ago       Running             kube-controller-manager   0                   6dc6c4e8b8bca       kube-controller-manager-multinode-015900
	11172ef348e40       e3db313c6dbc0                                                                              59 seconds ago       Running             kube-scheduler            0                   05ab95d27e3db       kube-scheduler-multinode-015900
	6eb3af9836893       7fe0e6f37db33                                                                              About a minute ago   Running             kube-apiserver            0                   b24923dba9040       kube-apiserver-multinode-015900
	
	* 
	* ==> coredns [ce5fda4e192f] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = e48cc74d4d4792b6e037fc6364095f03dd97c499e20d6def56cab70b374eb190d7fd9d3720ca48b7382edb6d6fbe7d631f96f64e38a41e6bd8617ab8ab6ece2c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59375 - 22306 "HINFO IN 712720368267907743.4018947577463859218. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.082850178s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-015900
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-015900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30d8ecd1811578f7b9db580c501c654c189f68d4
	                    minikube.k8s.io/name=multinode-015900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_18T12_57_49_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Dec 2023 12:57:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-015900
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Dec 2023 12:58:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Dec 2023 12:58:19 +0000   Mon, 18 Dec 2023 12:57:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Dec 2023 12:58:19 +0000   Mon, 18 Dec 2023 12:57:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Dec 2023 12:58:19 +0000   Mon, 18 Dec 2023 12:57:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Dec 2023 12:58:19 +0000   Mon, 18 Dec 2023 12:58:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.238.182
	  Hostname:    multinode-015900
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 9642ce71bc18407dae06f7ff0a55c3e9
	  System UUID:                b492d3f0-2f33-7042-bc48-d29ef920286a
	  Boot ID:                    0cf02454-6747-47df-823d-76cb15e13fd1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-256fn                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     41s
	  kube-system                 etcd-multinode-015900                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         53s
	  kube-system                 kindnet-bfllh                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      41s
	  kube-system                 kube-apiserver-multinode-015900             250m (12%)    0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-controller-manager-multinode-015900    200m (10%)    0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-proxy-xpxz2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-scheduler-multinode-015900             100m (5%)     0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 39s                kube-proxy       
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)  kubelet          Node multinode-015900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)  kubelet          Node multinode-015900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x7 over 62s)  kubelet          Node multinode-015900 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 53s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s                kubelet          Node multinode-015900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s                kubelet          Node multinode-015900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s                kubelet          Node multinode-015900 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  53s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           42s                node-controller  Node multinode-015900 event: Registered Node multinode-015900 in Controller
	  Normal  NodeReady                25s                kubelet          Node multinode-015900 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +1.315146] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.117498] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.236956] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Dec18 12:56] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +44.242867] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.148619] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[Dec18 12:57] systemd-fstab-generator[952]: Ignoring "noauto" for root device
	[  +0.592149] systemd-fstab-generator[991]: Ignoring "noauto" for root device
	[  +0.167881] systemd-fstab-generator[1002]: Ignoring "noauto" for root device
	[  +0.194951] systemd-fstab-generator[1015]: Ignoring "noauto" for root device
	[  +1.358713] kauditd_printk_skb: 28 callbacks suppressed
	[  +0.340651] systemd-fstab-generator[1173]: Ignoring "noauto" for root device
	[  +0.165769] systemd-fstab-generator[1184]: Ignoring "noauto" for root device
	[  +0.182075] systemd-fstab-generator[1195]: Ignoring "noauto" for root device
	[  +0.158810] systemd-fstab-generator[1206]: Ignoring "noauto" for root device
	[  +0.193105] systemd-fstab-generator[1220]: Ignoring "noauto" for root device
	[ +12.845013] systemd-fstab-generator[1328]: Ignoring "noauto" for root device
	[  +2.225688] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.125974] systemd-fstab-generator[1709]: Ignoring "noauto" for root device
	[  +0.807148] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.554721] systemd-fstab-generator[2669]: Ignoring "noauto" for root device
	[Dec18 12:58] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [b4091c55c317] <==
	* {"level":"info","ts":"2023-12-18T12:57:42.149157Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-18T12:57:42.149245Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.238.182:2380"}
	{"level":"info","ts":"2023-12-18T12:57:42.149504Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.238.182:2380"}
	{"level":"info","ts":"2023-12-18T12:57:42.150478Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"41ffe37f8ac56e1b","initial-advertise-peer-urls":["https://192.168.238.182:2380"],"listen-peer-urls":["https://192.168.238.182:2380"],"advertise-client-urls":["https://192.168.238.182:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.238.182:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-18T12:57:42.152748Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-18T12:57:42.161741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41ffe37f8ac56e1b is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-18T12:57:42.162034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41ffe37f8ac56e1b became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-18T12:57:42.162203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41ffe37f8ac56e1b received MsgPreVoteResp from 41ffe37f8ac56e1b at term 1"}
	{"level":"info","ts":"2023-12-18T12:57:42.162357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41ffe37f8ac56e1b became candidate at term 2"}
	{"level":"info","ts":"2023-12-18T12:57:42.162595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41ffe37f8ac56e1b received MsgVoteResp from 41ffe37f8ac56e1b at term 2"}
	{"level":"info","ts":"2023-12-18T12:57:42.162821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41ffe37f8ac56e1b became leader at term 2"}
	{"level":"info","ts":"2023-12-18T12:57:42.162975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 41ffe37f8ac56e1b elected leader 41ffe37f8ac56e1b at term 2"}
	{"level":"info","ts":"2023-12-18T12:57:42.166883Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"41ffe37f8ac56e1b","local-member-attributes":"{Name:multinode-015900 ClientURLs:[https://192.168.238.182:2379]}","request-path":"/0/members/41ffe37f8ac56e1b/attributes","cluster-id":"d139d8f891842dfc","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-18T12:57:42.167088Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-18T12:57:42.170001Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.238.182:2379"}
	{"level":"info","ts":"2023-12-18T12:57:42.167206Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-18T12:57:42.16742Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T12:57:42.167505Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-18T12:57:42.182853Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-18T12:57:42.186757Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d139d8f891842dfc","local-member-id":"41ffe37f8ac56e1b","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T12:57:42.189906Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T12:57:42.190037Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T12:57:42.197821Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2023-12-18T12:58:10.973504Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.8128ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-015900\" ","response":"range_response_count:1 size:4488"}
	{"level":"info","ts":"2023-12-18T12:58:10.973868Z","caller":"traceutil/trace.go:171","msg":"trace[1681957784] range","detail":"{range_begin:/registry/minions/multinode-015900; range_end:; response_count:1; response_revision:421; }","duration":"107.204003ms","start":"2023-12-18T12:58:10.866644Z","end":"2023-12-18T12:58:10.973848Z","steps":["trace[1681957784] 'range keys from in-memory index tree'  (duration: 106.618099ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  12:58:41 up 2 min,  0 users,  load average: 1.49, 0.57, 0.21
	Linux multinode-015900 5.10.57 #1 SMP Wed Dec 13 22:38:26 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [1bc4da30bccb] <==
	* I1218 12:58:11.613915       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1218 12:58:11.614240       1 main.go:107] hostIP = 192.168.238.182
	podIP = 192.168.238.182
	I1218 12:58:11.614448       1 main.go:116] setting mtu 1500 for CNI 
	I1218 12:58:11.614465       1 main.go:146] kindnetd IP family: "ipv4"
	I1218 12:58:11.614483       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1218 12:58:12.215365       1 main.go:223] Handling node with IPs: map[192.168.238.182:{}]
	I1218 12:58:12.215432       1 main.go:227] handling current node
	I1218 12:58:22.226935       1 main.go:223] Handling node with IPs: map[192.168.238.182:{}]
	I1218 12:58:22.227047       1 main.go:227] handling current node
	I1218 12:58:32.234161       1 main.go:223] Handling node with IPs: map[192.168.238.182:{}]
	I1218 12:58:32.234287       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [6eb3af983689] <==
	* I1218 12:57:44.444287       1 controller.go:624] quota admission added evaluator for: namespaces
	I1218 12:57:44.451095       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1218 12:57:44.451111       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1218 12:57:44.451295       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1218 12:57:44.451442       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1218 12:57:44.451612       1 aggregator.go:166] initial CRD sync complete...
	I1218 12:57:44.451762       1 autoregister_controller.go:141] Starting autoregister controller
	I1218 12:57:44.451941       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1218 12:57:44.452038       1 cache.go:39] Caches are synced for autoregister controller
	I1218 12:57:44.541375       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1218 12:57:45.251967       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1218 12:57:45.262520       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1218 12:57:45.262610       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1218 12:57:46.063110       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1218 12:57:46.132034       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1218 12:57:46.220907       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1218 12:57:46.229961       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.238.182]
	I1218 12:57:46.231438       1 controller.go:624] quota admission added evaluator for: endpoints
	I1218 12:57:46.238355       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1218 12:57:46.365982       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1218 12:57:48.067539       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1218 12:57:48.083280       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1218 12:57:48.098551       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1218 12:57:59.371448       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1218 12:58:00.070881       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [3b0c7b029fe2] <==
	* I1218 12:57:59.307288       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1218 12:57:59.326210       1 shared_informer.go:318] Caches are synced for resource quota
	I1218 12:57:59.369923       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1218 12:57:59.376893       1 shared_informer.go:318] Caches are synced for resource quota
	I1218 12:57:59.379895       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1218 12:57:59.387923       1 shared_informer.go:318] Caches are synced for disruption
	I1218 12:57:59.764072       1 shared_informer.go:318] Caches are synced for garbage collector
	I1218 12:57:59.764229       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1218 12:57:59.789470       1 shared_informer.go:318] Caches are synced for garbage collector
	I1218 12:58:00.090922       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-bfllh"
	I1218 12:58:00.095676       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xpxz2"
	I1218 12:58:00.208362       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1218 12:58:00.298916       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-jf64x"
	I1218 12:58:00.345879       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-256fn"
	I1218 12:58:00.375518       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="995.424402ms"
	I1218 12:58:00.389464       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-jf64x"
	I1218 12:58:00.405536       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="29.86682ms"
	I1218 12:58:00.427988       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.40744ms"
	I1218 12:58:00.428594       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="216.602µs"
	I1218 12:58:16.597503       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="250.401µs"
	I1218 12:58:16.640495       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.701µs"
	I1218 12:58:19.032207       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.5µs"
	I1218 12:58:19.073070       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.560837ms"
	I1218 12:58:19.074978       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="290.701µs"
	I1218 12:58:19.266042       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	* 
	* ==> kube-proxy [cbb2ed8a2d45] <==
	* I1218 12:58:01.407065       1 server_others.go:69] "Using iptables proxy"
	I1218 12:58:01.431322       1 node.go:141] Successfully retrieved node IP: 192.168.238.182
	I1218 12:58:01.484627       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1218 12:58:01.484653       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1218 12:58:01.489185       1 server_others.go:152] "Using iptables Proxier"
	I1218 12:58:01.489391       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1218 12:58:01.490772       1 server.go:846] "Version info" version="v1.28.4"
	I1218 12:58:01.491259       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1218 12:58:01.494167       1 config.go:188] "Starting service config controller"
	I1218 12:58:01.494300       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1218 12:58:01.494488       1 config.go:97] "Starting endpoint slice config controller"
	I1218 12:58:01.494636       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1218 12:58:01.497871       1 config.go:315] "Starting node config controller"
	I1218 12:58:01.497885       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1218 12:58:01.597755       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1218 12:58:01.603076       1 shared_informer.go:318] Caches are synced for node config
	I1218 12:58:01.597939       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [11172ef348e4] <==
	* E1218 12:57:44.421204       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1218 12:57:44.421758       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1218 12:57:44.425651       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1218 12:57:44.425662       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1218 12:57:44.425668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1218 12:57:44.425674       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1218 12:57:45.237083       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1218 12:57:45.237113       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1218 12:57:45.258302       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1218 12:57:45.258342       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1218 12:57:45.363853       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1218 12:57:45.364833       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1218 12:57:45.392943       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1218 12:57:45.393313       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1218 12:57:45.476378       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1218 12:57:45.476547       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1218 12:57:45.525478       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1218 12:57:45.525523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1218 12:57:45.546560       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1218 12:57:45.547049       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1218 12:57:45.546922       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1218 12:57:45.547771       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1218 12:57:45.628817       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1218 12:57:45.628929       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1218 12:57:47.687467       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-12-18 12:55:53 UTC, ends at Mon 2023-12-18 12:58:41 UTC. --
	Dec 18 12:57:59 multinode-015900 kubelet[2696]: I1218 12:57:59.317589    2696 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 18 12:58:00 multinode-015900 kubelet[2696]: I1218 12:58:00.132607    2696 topology_manager.go:215] "Topology Admit Handler" podUID="f376dae1-8132-46e0-a367-7a4764b6138b" podNamespace="kube-system" podName="kindnet-bfllh"
	Dec 18 12:58:00 multinode-015900 kubelet[2696]: I1218 12:58:00.186046    2696 topology_manager.go:215] "Topology Admit Handler" podUID="6070d8c7-5af2-4e9f-b737-760782b764a6" podNamespace="kube-system" podName="kube-proxy-xpxz2"
	Dec 18 12:58:00 multinode-015900 kubelet[2696]: I1218 12:58:00.196507    2696 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f376dae1-8132-46e0-a367-7a4764b6138b-lib-modules\") pod \"kindnet-bfllh\" (UID: \"f376dae1-8132-46e0-a367-7a4764b6138b\") " pod="kube-system/kindnet-bfllh"
	Dec 18 12:58:00 multinode-015900 kubelet[2696]: I1218 12:58:00.208182    2696 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2ftm\" (UniqueName: \"kubernetes.io/projected/f376dae1-8132-46e0-a367-7a4764b6138b-kube-api-access-f2ftm\") pod \"kindnet-bfllh\" (UID: \"f376dae1-8132-46e0-a367-7a4764b6138b\") " pod="kube-system/kindnet-bfllh"
	Dec 18 12:58:00 multinode-015900 kubelet[2696]: I1218 12:58:00.208641    2696 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f376dae1-8132-46e0-a367-7a4764b6138b-xtables-lock\") pod \"kindnet-bfllh\" (UID: \"f376dae1-8132-46e0-a367-7a4764b6138b\") " pod="kube-system/kindnet-bfllh"
	Dec 18 12:58:00 multinode-015900 kubelet[2696]: I1218 12:58:00.208851    2696 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f376dae1-8132-46e0-a367-7a4764b6138b-cni-cfg\") pod \"kindnet-bfllh\" (UID: \"f376dae1-8132-46e0-a367-7a4764b6138b\") " pod="kube-system/kindnet-bfllh"
	Dec 18 12:58:00 multinode-015900 kubelet[2696]: I1218 12:58:00.310466    2696 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6070d8c7-5af2-4e9f-b737-760782b764a6-xtables-lock\") pod \"kube-proxy-xpxz2\" (UID: \"6070d8c7-5af2-4e9f-b737-760782b764a6\") " pod="kube-system/kube-proxy-xpxz2"
	Dec 18 12:58:00 multinode-015900 kubelet[2696]: I1218 12:58:00.310511    2696 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6070d8c7-5af2-4e9f-b737-760782b764a6-lib-modules\") pod \"kube-proxy-xpxz2\" (UID: \"6070d8c7-5af2-4e9f-b737-760782b764a6\") " pod="kube-system/kube-proxy-xpxz2"
	Dec 18 12:58:00 multinode-015900 kubelet[2696]: I1218 12:58:00.310554    2696 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lm2q\" (UniqueName: \"kubernetes.io/projected/6070d8c7-5af2-4e9f-b737-760782b764a6-kube-api-access-6lm2q\") pod \"kube-proxy-xpxz2\" (UID: \"6070d8c7-5af2-4e9f-b737-760782b764a6\") " pod="kube-system/kube-proxy-xpxz2"
	Dec 18 12:58:00 multinode-015900 kubelet[2696]: I1218 12:58:00.310620    2696 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6070d8c7-5af2-4e9f-b737-760782b764a6-kube-proxy\") pod \"kube-proxy-xpxz2\" (UID: \"6070d8c7-5af2-4e9f-b737-760782b764a6\") " pod="kube-system/kube-proxy-xpxz2"
	Dec 18 12:58:05 multinode-015900 kubelet[2696]: I1218 12:58:05.950746    2696 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edbac0d907de9c24ba1961ed7a92cc8209455aada6074e050909d9df92d3f558"
	Dec 18 12:58:08 multinode-015900 kubelet[2696]: I1218 12:58:08.434639    2696 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xpxz2" podStartSLOduration=8.4345963 podCreationTimestamp="2023-12-18 12:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-18 12:58:01.642913388 +0000 UTC m=+13.615243282" watchObservedRunningTime="2023-12-18 12:58:08.4345963 +0000 UTC m=+20.406926194"
	Dec 18 12:58:16 multinode-015900 kubelet[2696]: I1218 12:58:16.551208    2696 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 18 12:58:16 multinode-015900 kubelet[2696]: I1218 12:58:16.592113    2696 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-bfllh" podStartSLOduration=11.852970895 podCreationTimestamp="2023-12-18 12:58:00 +0000 UTC" firstStartedPulling="2023-12-18 12:58:05.952845271 +0000 UTC m=+17.925175065" lastFinishedPulling="2023-12-18 12:58:10.69194979 +0000 UTC m=+22.664279584" observedRunningTime="2023-12-18 12:58:12.047985923 +0000 UTC m=+24.020315717" watchObservedRunningTime="2023-12-18 12:58:16.592075414 +0000 UTC m=+28.564405308"
	Dec 18 12:58:16 multinode-015900 kubelet[2696]: I1218 12:58:16.592277    2696 topology_manager.go:215] "Topology Admit Handler" podUID="9b6ddc85-8b7a-45d0-9867-3be6bd8085e6" podNamespace="kube-system" podName="storage-provisioner"
	Dec 18 12:58:16 multinode-015900 kubelet[2696]: I1218 12:58:16.596449    2696 topology_manager.go:215] "Topology Admit Handler" podUID="6bd59fc6-50d6-4764-823d-71232811cff2" podNamespace="kube-system" podName="coredns-5dd5756b68-256fn"
	Dec 18 12:58:16 multinode-015900 kubelet[2696]: I1218 12:58:16.762786    2696 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkh2p\" (UniqueName: \"kubernetes.io/projected/9b6ddc85-8b7a-45d0-9867-3be6bd8085e6-kube-api-access-vkh2p\") pod \"storage-provisioner\" (UID: \"9b6ddc85-8b7a-45d0-9867-3be6bd8085e6\") " pod="kube-system/storage-provisioner"
	Dec 18 12:58:16 multinode-015900 kubelet[2696]: I1218 12:58:16.762853    2696 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9f8l\" (UniqueName: \"kubernetes.io/projected/6bd59fc6-50d6-4764-823d-71232811cff2-kube-api-access-n9f8l\") pod \"coredns-5dd5756b68-256fn\" (UID: \"6bd59fc6-50d6-4764-823d-71232811cff2\") " pod="kube-system/coredns-5dd5756b68-256fn"
	Dec 18 12:58:16 multinode-015900 kubelet[2696]: I1218 12:58:16.762880    2696 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9b6ddc85-8b7a-45d0-9867-3be6bd8085e6-tmp\") pod \"storage-provisioner\" (UID: \"9b6ddc85-8b7a-45d0-9867-3be6bd8085e6\") " pod="kube-system/storage-provisioner"
	Dec 18 12:58:16 multinode-015900 kubelet[2696]: I1218 12:58:16.762904    2696 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bd59fc6-50d6-4764-823d-71232811cff2-config-volume\") pod \"coredns-5dd5756b68-256fn\" (UID: \"6bd59fc6-50d6-4764-823d-71232811cff2\") " pod="kube-system/coredns-5dd5756b68-256fn"
	Dec 18 12:58:17 multinode-015900 kubelet[2696]: I1218 12:58:17.727884    2696 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83b7f24a350e12a0393bfee9379b82efd391afe1f3f144683858ea37d0304250"
	Dec 18 12:58:17 multinode-015900 kubelet[2696]: I1218 12:58:17.974783    2696 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3844ace09fc880146b66e177e06d623398d2d6294b6b4567d7408eb789d95a9c"
	Dec 18 12:58:19 multinode-015900 kubelet[2696]: I1218 12:58:19.036466    2696 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.036424511 podCreationTimestamp="2023-12-18 12:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-18 12:58:19.009167825 +0000 UTC m=+30.981497719" watchObservedRunningTime="2023-12-18 12:58:19.036424511 +0000 UTC m=+31.008754305"
	Dec 18 12:58:19 multinode-015900 kubelet[2696]: I1218 12:58:19.060990    2696 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-256fn" podStartSLOduration=19.060951088 podCreationTimestamp="2023-12-18 12:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-18 12:58:19.037843415 +0000 UTC m=+31.010173209" watchObservedRunningTime="2023-12-18 12:58:19.060951088 +0000 UTC m=+31.033280882"
	
	* 
	* ==> storage-provisioner [6803f1209e6c] <==
	* I1218 12:58:18.094468       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1218 12:58:18.126636       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1218 12:58:18.129320       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1218 12:58:18.148124       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1218 12:58:18.149008       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-015900_dcfef51d-f585-43fa-bb0e-238906d259a2!
	I1218 12:58:18.149616       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"347a87ae-8581-4703-a4c5-aacd517d4214", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-015900_dcfef51d-f585-43fa-bb0e-238906d259a2 became leader
	I1218 12:58:18.257838       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-015900_dcfef51d-f585-43fa-bb0e-238906d259a2!
	

-- /stdout --
** stderr ** 
	W1218 12:58:33.214253    2792 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
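
The etcd dump above records a read-only range over /registry/minions/multinode-015900 taking about 107ms against etcd's 100ms expected duration. As an illustration only (not part of this test suite), a Go client can put the same 100ms budget on such a read with a context deadline; the endpoint and key are taken from the log, TLS setup is omitted, and everything else is an assumption:

    package main

    import (
        "context"
        "fmt"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        // Endpoint as advertised in the etcd log above; a real client would also
        // need the TLS material listed in the "starting with client TLS" line.
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"https://192.168.238.182:2379"},
            DialTimeout: 5 * time.Second,
        })
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        // Bound the read with the 100ms "expected-duration" etcd warns about.
        ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
        defer cancel()
        resp, err := cli.Get(ctx, "/registry/minions/multinode-015900")
        if err != nil {
            fmt.Println("range read failed or exceeded the budget:", err)
            return
        }
        fmt.Println("kvs returned:", len(resp.Kvs))
    }
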
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-015900 -n multinode-015900
E1218 12:58:42.371892   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-015900 -n multinode-015900: (12.1206696s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-015900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (243.23s)

TestMultiNode/serial/DeleteNode (53.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-015900 node delete m03
E1218 12:58:56.553680   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-015900 node delete m03: exit status 80 (7.4003841s)

-- stdout --
	* Deleting node m03 from cluster multinode-015900
	
	

-- /stdout --
** stderr ** 
	W1218 12:58:54.758549   11456 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_NODE_DELETE: deleting node: retrieve: Could not find node m03
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_node_8f4e751a46f277db0872d89337abd49e62cd2e48_0.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
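
The "node delete m03" run above exits with status 80 because the restarted cluster no longer knows a node named m03 (minikube's GUEST_NODE_DELETE error class). Below is a minimal sketch of how a harness can capture that exit code from the minikube binary, in the spirit of the Run / Non-zero exit lines in this report; the binary path and arguments are copied from the log, the wrapper itself is assumed:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-windows-amd64.exe",
            "-p", "multinode-015900", "node", "delete", "m03")
        out, err := cmd.CombinedOutput()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // In the failure above this prints 80, the GUEST_NODE_DELETE exit code.
            fmt.Printf("non-zero exit: %d\n%s", ee.ExitCode(), out)
            return
        }
        if err != nil {
            panic(err) // the binary could not be started at all
        }
        fmt.Printf("ok:\n%s", out)
    }
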
multinode_test.go:424: node stop returned an error. args "out/minikube-windows-amd64.exe -p multinode-015900 node delete m03": exit status 80
multinode_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-015900 status --alsologtostderr
E1218 12:59:02.424005   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
multinode_test.go:428: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-015900 status --alsologtostderr: (12.0253484s)
multinode_test.go:434: status says both hosts are not running: args "out/minikube-windows-amd64.exe -p multinode-015900 status --alsologtostderr": multinode-015900
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode_test.go:438: status says both kubelets are not running: args "out/minikube-windows-amd64.exe -p multinode-015900 status --alsologtostderr": multinode-015900
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
multinode_test.go:465: expected 2 nodes Ready status to be True, got 
-- stdout --
	' True
	'

-- /stdout --
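
multinode_test.go:465 counts the True lines in the go-template output above and expects two; after the failed delete only the control-plane node reports Ready. A rough client-go equivalent of that check, for illustration only (the kubeconfig path and the program around the loop are assumptions, not the test's actual code):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the default kubeconfig; the suite points KUBECONFIG at its own file.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        ready := 0
        for _, n := range nodes.Items {
            for _, c := range n.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    ready++
                }
            }
        }
        // The assertion above wants ready == 2 for this two-node profile.
        fmt.Printf("%d of %d nodes Ready\n", ready, len(nodes.Items))
    }
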
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900: (11.9652203s)
helpers_test.go:244: <<< TestMultiNode/serial/DeleteNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-015900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-015900 logs -n 25: (8.2524847s)
helpers_test.go:252: TestMultiNode/serial/DeleteNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p multinode-015900 -- rollout       | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:50 UTC |                     |
	|         | status deployment/busybox            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o   | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:50 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o   | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:50 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o   | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:50 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o   | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:50 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o   | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:50 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o   | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:50 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o   | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:50 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o   | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:51 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o   | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:51 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o   | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:51 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o   | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:52 UTC |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o   | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:52 UTC |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- exec          | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:52 UTC |                     |
	|         | -- nslookup kubernetes.io            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- exec          | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:52 UTC |                     |
	|         | -- nslookup kubernetes.default       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900                  | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:52 UTC |                     |
	|         | -- exec  -- nslookup                 |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |                   |         |                     |                     |
	| kubectl | -p multinode-015900 -- get pods -o   | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:52 UTC |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                  |                   |         |                     |                     |
	| node    | add -p multinode-015900 -v 3         | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:52 UTC |                     |
	|         | --alsologtostderr                    |                  |                   |         |                     |                     |
	| node    | multinode-015900 node stop m03       | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:54 UTC |                     |
	| node    | multinode-015900 node start          | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:54 UTC |                     |
	|         | m03 --alsologtostderr                |                  |                   |         |                     |                     |
	| node    | list -p multinode-015900             | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:54 UTC |                     |
	| stop    | -p multinode-015900                  | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:54 UTC | 18 Dec 23 12:55 UTC |
	| start   | -p multinode-015900                  | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:55 UTC | 18 Dec 23 12:58 UTC |
	|         | --wait=true -v=8                     |                  |                   |         |                     |                     |
	|         | --alsologtostderr                    |                  |                   |         |                     |                     |
	| node    | list -p multinode-015900             | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:58 UTC |                     |
	| node    | multinode-015900 node delete         | multinode-015900 | minikube7\jenkins | v1.32.0 | 18 Dec 23 12:58 UTC |                     |
	|         | m03                                  |                  |                   |         |                     |                     |
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 12:55:26
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 12:55:26.953249   12728 out.go:296] Setting OutFile to fd 716 ...
	I1218 12:55:26.954267   12728 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 12:55:26.954267   12728 out.go:309] Setting ErrFile to fd 776...
	I1218 12:55:26.954267   12728 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 12:55:26.974709   12728 out.go:303] Setting JSON to false
	I1218 12:55:26.977688   12728 start.go:128] hostinfo: {"hostname":"minikube7","uptime":4601,"bootTime":1702899525,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1218 12:55:26.978381   12728 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1218 12:55:26.980821   12728 out.go:177] * [multinode-015900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1218 12:55:26.981650   12728 notify.go:220] Checking for updates...
	I1218 12:55:26.982916   12728 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 12:55:26.983637   12728 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 12:55:26.984323   12728 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1218 12:55:26.984717   12728 out.go:177]   - MINIKUBE_LOCATION=17824
	I1218 12:55:26.985591   12728 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 12:55:26.987252   12728 config.go:182] Loaded profile config "multinode-015900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 12:55:26.987307   12728 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 12:55:32.304298   12728 out.go:177] * Using the hyperv driver based on existing profile
	I1218 12:55:32.305054   12728 start.go:298] selected driver: hyperv
	I1218 12:55:32.305054   12728 start.go:902] validating driver "hyperv" against &{Name:multinode-015900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-015900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.235.154 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 12:55:32.305326   12728 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 12:55:32.353025   12728 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 12:55:32.353025   12728 cni.go:84] Creating CNI manager for ""
	I1218 12:55:32.353025   12728 cni.go:136] 1 nodes found, recommending kindnet
	I1218 12:55:32.353025   12728 start_flags.go:323] config:
	{Name:multinode-015900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-015900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.235.154 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 12:55:32.353736   12728 iso.go:125] acquiring lock: {Name:mka7fdf4332632f41b99a369cc525e40499a0030 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 12:55:32.355103   12728 out.go:177] * Starting control plane node multinode-015900 in cluster multinode-015900
	I1218 12:55:32.355546   12728 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 12:55:32.355788   12728 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1218 12:55:32.355788   12728 cache.go:56] Caching tarball of preloaded images
	I1218 12:55:32.355929   12728 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1218 12:55:32.355929   12728 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1218 12:55:32.356649   12728 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\config.json ...
	I1218 12:55:32.359265   12728 start.go:365] acquiring machines lock for multinode-015900: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1218 12:55:32.359265   12728 start.go:369] acquired machines lock for "multinode-015900" in 0s
	I1218 12:55:32.359265   12728 start.go:96] Skipping create...Using existing machine configuration
	I1218 12:55:32.359265   12728 fix.go:54] fixHost starting: 
	I1218 12:55:32.360294   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:55:35.042512   12728 main.go:141] libmachine: [stdout =====>] : Off
	
	I1218 12:55:35.042512   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:55:35.042512   12728 fix.go:102] recreateIfNeeded on multinode-015900: state=Stopped err=<nil>
	W1218 12:55:35.042512   12728 fix.go:128] unexpected machine state, will restart: <nil>
	I1218 12:55:35.043707   12728 out.go:177] * Restarting existing hyperv VM for "multinode-015900" ...
	I1218 12:55:35.044295   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-015900
	I1218 12:55:37.873716   12728 main.go:141] libmachine: [stdout =====>] : 
	I1218 12:55:37.873937   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:55:37.873937   12728 main.go:141] libmachine: Waiting for host to start...
	I1218 12:55:37.873969   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:55:40.052384   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:55:40.052384   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:55:40.052384   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:55:42.526741   12728 main.go:141] libmachine: [stdout =====>] : 
	I1218 12:55:42.526741   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:55:43.528799   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:55:45.706958   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:55:45.706958   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:55:45.707087   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:55:48.164591   12728 main.go:141] libmachine: [stdout =====>] : 
	I1218 12:55:48.164623   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:55:49.169959   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:55:51.287727   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:55:51.287727   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:55:51.287823   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:55:53.792706   12728 main.go:141] libmachine: [stdout =====>] : 
	I1218 12:55:53.792706   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:55:54.798995   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:55:56.977524   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:55:56.977563   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:55:56.977595   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:55:59.438045   12728 main.go:141] libmachine: [stdout =====>] : 
	I1218 12:55:59.438045   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:00.438781   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:02.642950   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:02.643135   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:02.643370   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:05.189030   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:05.189030   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:05.192084   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:07.292971   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:07.293196   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:07.293346   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:09.752047   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:09.752047   12728 main.go:141] libmachine: [stderr =====>] : 
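	Note on the polling above: the hyperv driver has no event API to wait on, so libmachine shells out to PowerShell about once per second, first for the VM state and then for the first IPv4 address of the first adapter, retrying while stdout comes back empty. A minimal Go sketch of the same pattern (pollVMIP and the 5-minute budget are illustrative names and values, not taken from the minikube source):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // pollVMIP repeatedly asks Hyper-V (via PowerShell) for the first IPv4
    // address of the named VM, mirroring the Get-VM calls in the log above.
    func pollVMIP(vmName string, timeout time.Duration) (string, error) {
    	script := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName)
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
    		if err != nil {
    			return "", err
    		}
    		ip := strings.TrimSpace(string(out))
    		// Right after Start-VM the adapter has no address yet, so stdout is empty.
    		if ip != "" && !strings.Contains(ip, ":") { // skip IPv6/link-local entries
    			return ip, nil
    		}
    		time.Sleep(time.Second)
    	}
    	return "", fmt.Errorf("timed out waiting for %s to get an IP", vmName)
    }

    func main() {
    	ip, err := pollVMIP("multinode-015900", 5*time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(ip)
    }

	The empty-stdout rounds at 12:55:42 through 12:55:59 above are exactly the retry branch of that loop.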
	I1218 12:56:09.752280   12728 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\config.json ...
	I1218 12:56:09.754984   12728 machine.go:88] provisioning docker machine ...
	I1218 12:56:09.754984   12728 buildroot.go:166] provisioning hostname "multinode-015900"
	I1218 12:56:09.754984   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:11.872698   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:11.872942   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:11.873175   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:14.343383   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:14.343744   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:14.350406   12728 main.go:141] libmachine: Using SSH client type: native
	I1218 12:56:14.351156   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.238.182 22 <nil> <nil>}
	I1218 12:56:14.351156   12728 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-015900 && echo "multinode-015900" | sudo tee /etc/hostname
	I1218 12:56:14.525071   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-015900
	
	I1218 12:56:14.525234   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:16.569409   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:16.569409   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:16.569493   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:19.068951   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:19.068951   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:19.075204   12728 main.go:141] libmachine: Using SSH client type: native
	I1218 12:56:19.075890   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.238.182 22 <nil> <nil>}
	I1218 12:56:19.075890   12728 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-015900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-015900/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-015900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 12:56:19.230399   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: 
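	Each of these provisioning snippets is run over SSH as the docker user, authenticated with the machine's generated key (the id_rsa path logged further down). A sketch of one such round-trip using golang.org/x/crypto/ssh (address, user, and command are taken from the surrounding log; the key path is a placeholder and error handling is condensed):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runSSH executes one command on the VM, the way libmachine runs the
    // hostname and /etc/hosts snippets shown above.
    func runSSH(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs have throwaway host keys
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runSSH("192.168.238.182:22", "docker",
    		`C:\path\to\id_rsa`, // placeholder key path
    		`sudo hostname multinode-015900 && echo "multinode-015900" | sudo tee /etc/hostname`)
    	fmt.Println(out, err)
    }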
	I1218 12:56:19.230399   12728 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1218 12:56:19.230399   12728 buildroot.go:174] setting up certificates
	I1218 12:56:19.230399   12728 provision.go:83] configureAuth start
	I1218 12:56:19.230399   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:21.311084   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:21.311084   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:21.311084   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:23.836209   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:23.836396   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:23.836396   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:25.888827   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:25.889133   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:25.889133   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:28.330634   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:28.330883   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:28.330883   12728 provision.go:138] copyHostCerts
	I1218 12:56:28.331196   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I1218 12:56:28.331196   12728 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1218 12:56:28.331196   12728 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1218 12:56:28.331929   12728 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1218 12:56:28.333346   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I1218 12:56:28.333833   12728 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1218 12:56:28.333833   12728 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1218 12:56:28.334342   12728 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1218 12:56:28.335794   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I1218 12:56:28.336149   12728 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1218 12:56:28.336242   12728 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1218 12:56:28.336335   12728 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1218 12:56:28.337481   12728 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-015900 san=[192.168.238.182 192.168.238.182 localhost 127.0.0.1 minikube multinode-015900]
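	configureAuth re-issues the Docker server certificate against the local minikube CA each time the host is fixed up, listing the fresh IP plus the usual hostnames as SANs, which is why 192.168.238.182 appears in the san=[...] list above. A condensed sketch of that kind of SAN-bearing issuance with crypto/x509 (key sizes, validity, and output paths are simplified assumptions):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // issueServerCert signs a server certificate with the given CA, embedding
    // the VM IP and hostnames as SANs, similar to the generate-server-cert step.
    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, dns []string) error {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-015900"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips,
    		DNSNames:     dns,
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	if err != nil {
    		return err
    	}
    	return os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
    }

    func main() {
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	ca, err := x509.ParseCertificate(caDER)
    	if err != nil {
    		panic(err)
    	}
    	ips := []net.IP{net.ParseIP("192.168.238.182"), net.ParseIP("127.0.0.1")}
    	dns := []string{"localhost", "minikube", "multinode-015900"}
    	if err := issueServerCert(ca, caKey, ips, dns); err != nil {
    		panic(err)
    	}
    }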
	I1218 12:56:28.727905   12728 provision.go:172] copyRemoteCerts
	I1218 12:56:28.739908   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 12:56:28.739908   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:30.827240   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:30.827404   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:30.827495   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:33.327174   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:33.327174   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:33.327826   12728 sshutil.go:53] new ssh client: &{IP:192.168.238.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa Username:docker}
	I1218 12:56:33.437649   12728 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6976894s)
	I1218 12:56:33.437755   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1218 12:56:33.437755   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1218 12:56:33.476571   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1218 12:56:33.476571   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I1218 12:56:33.515593   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1218 12:56:33.515593   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 12:56:33.554491   12728 provision.go:86] duration metric: configureAuth took 14.3240421s
	I1218 12:56:33.554491   12728 buildroot.go:189] setting minikube options for container-runtime
	I1218 12:56:33.555745   12728 config.go:182] Loaded profile config "multinode-015900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 12:56:33.555840   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:35.653970   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:35.654082   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:35.654082   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:38.096962   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:38.096962   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:38.102257   12728 main.go:141] libmachine: Using SSH client type: native
	I1218 12:56:38.103031   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.238.182 22 <nil> <nil>}
	I1218 12:56:38.103031   12728 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1218 12:56:38.246154   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1218 12:56:38.246233   12728 buildroot.go:70] root file system type: tmpfs
	I1218 12:56:38.246430   12728 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1218 12:56:38.246430   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:40.304852   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:40.304852   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:40.305086   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:42.781965   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:42.781965   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:42.787302   12728 main.go:141] libmachine: Using SSH client type: native
	I1218 12:56:42.788071   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.238.182 22 <nil> <nil>}
	I1218 12:56:42.788647   12728 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1218 12:56:42.961205   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1218 12:56:42.961758   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:45.043579   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:45.043820   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:45.043820   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:47.543715   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:47.543980   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:47.553171   12728 main.go:141] libmachine: Using SSH client type: native
	I1218 12:56:47.553885   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.238.182 22 <nil> <nil>}
	I1218 12:56:47.553885   12728 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1218 12:56:48.581386   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1218 12:56:48.581386   12728 machine.go:91] provisioned docker machine in 38.8262664s
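	The `diff -u old new || { mv ...; systemctl ... }` one-liner above is an install-if-changed guard: the rendered unit only replaces the on-disk one, and docker is only restarted, when the two differ. Here diff fails because no unit exists yet, so the new file is installed and the service enabled (the "Created symlink" line). The same guard expressed locally in Go (the path and the truncated sample unit body are placeholders):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // installIfChanged writes the desired unit only when it differs from what
    // is on disk, then reloads systemd and restarts the service -- the Go
    // analogue of the diff/mv/systemctl pipeline above.
    func installIfChanged(path string, desired []byte) error {
    	current, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(current, desired) {
    		return nil // unit unchanged: skip the disruptive restart
    	}
    	if err := os.WriteFile(path, desired, 0644); err != nil {
    		return err
    	}
    	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated sample
    	if err := installIfChanged("/lib/systemd/system/docker.service", unit); err != nil {
    		panic(err)
    	}
    }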
	I1218 12:56:48.581386   12728 start.go:300] post-start starting for "multinode-015900" (driver="hyperv")
	I1218 12:56:48.581386   12728 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 12:56:48.595197   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 12:56:48.595197   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:50.674839   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:50.674839   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:50.674935   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:53.165656   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:53.165891   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:53.166527   12728 sshutil.go:53] new ssh client: &{IP:192.168.238.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa Username:docker}
	I1218 12:56:53.273775   12728 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6784988s)
	I1218 12:56:53.294575   12728 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 12:56:53.303042   12728 command_runner.go:130] > NAME=Buildroot
	I1218 12:56:53.303042   12728 command_runner.go:130] > VERSION=2021.02.12-1-g0492d51-dirty
	I1218 12:56:53.303042   12728 command_runner.go:130] > ID=buildroot
	I1218 12:56:53.303042   12728 command_runner.go:130] > VERSION_ID=2021.02.12
	I1218 12:56:53.303042   12728 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1218 12:56:53.303042   12728 info.go:137] Remote host: Buildroot 2021.02.12
	I1218 12:56:53.303042   12728 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1218 12:56:53.303042   12728 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1218 12:56:53.304157   12728 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem -> 149282.pem in /etc/ssl/certs
	I1218 12:56:53.304157   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem -> /etc/ssl/certs/149282.pem
	I1218 12:56:53.317316   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 12:56:53.335402   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem --> /etc/ssl/certs/149282.pem (1708 bytes)
	I1218 12:56:53.377785   12728 start.go:303] post-start completed in 4.796382s
	I1218 12:56:53.377785   12728 fix.go:56] fixHost completed within 1m21.0182343s
	I1218 12:56:53.377896   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:56:55.460466   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:56:55.460797   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:55.460797   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:56:57.922452   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:56:57.922733   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:56:57.931724   12728 main.go:141] libmachine: Using SSH client type: native
	I1218 12:56:57.932620   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.238.182 22 <nil> <nil>}
	I1218 12:56:57.932620   12728 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1218 12:56:58.070496   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702904218.079681508
	
	I1218 12:56:58.070496   12728 fix.go:206] guest clock: 1702904218.079681508
	I1218 12:56:58.070496   12728 fix.go:219] Guest: 2023-12-18 12:56:58.079681508 +0000 UTC Remote: 2023-12-18 12:56:53.3777852 +0000 UTC m=+86.592920601 (delta=4.701896308s)
	I1218 12:56:58.070609   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:57:00.191850   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:57:00.191940   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:57:00.191940   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:57:02.669395   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:57:02.669500   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:57:02.675435   12728 main.go:141] libmachine: Using SSH client type: native
	I1218 12:57:02.676187   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.238.182 22 <nil> <nil>}
	I1218 12:57:02.676187   12728 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702904218
	I1218 12:57:02.825924   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Dec 18 12:56:58 UTC 2023
	
	I1218 12:57:02.825924   12728 fix.go:226] clock set: Mon Dec 18 12:56:58 UTC 2023
	 (err=<nil>)
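	fixHost then reconciles the guest clock: it reads the guest's `date +%s.%N`, compares it with the host's idea of the time (a 4.7 s delta above, since the VM had been powered off), and pushes a `sudo date -s @<unix-seconds>` when the drift is too large. A sketch of the decision and the command shape (the 1 s tolerance is an assumption, not minikube's actual threshold; the timestamp in main is the guest reading from the log, which matches the value in the date command above):

    package main

    import (
    	"fmt"
    	"time"
    )

    // needsClockFix reports whether guest and host clocks disagree by more
    // than the tolerance.
    func needsClockFix(guest, host time.Time, tolerance time.Duration) bool {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta > tolerance
    }

    // dateCommand builds the resync command in the form seen in the log.
    func dateCommand(target time.Time) string {
    	return fmt.Sprintf("sudo date -s @%d", target.Unix())
    }

    func main() {
    	guest := time.Unix(1702904218, 79681508) // guest clock line above
    	host := time.Date(2023, 12, 18, 12, 56, 53, 377785200, time.UTC)
    	if needsClockFix(guest, host, time.Second) {
    		fmt.Println(dateCommand(guest)) // prints: sudo date -s @1702904218
    	}
    }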
	I1218 12:57:02.825924   12728 start.go:83] releasing machines lock for "multinode-015900", held for 1m30.4663406s
	I1218 12:57:02.826576   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:57:04.923215   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:57:04.923308   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:57:04.923308   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:57:07.393166   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:57:07.393166   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:57:07.397736   12728 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 12:57:07.397812   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:57:07.412510   12728 ssh_runner.go:195] Run: cat /version.json
	I1218 12:57:07.412510   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:57:09.606449   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:57:09.606449   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:57:09.606567   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:57:09.606624   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:57:09.606567   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:57:09.606624   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:57:12.248392   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:57:12.248567   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:57:12.249371   12728 sshutil.go:53] new ssh client: &{IP:192.168.238.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa Username:docker}
	I1218 12:57:12.268308   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:57:12.268308   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:57:12.270402   12728 sshutil.go:53] new ssh client: &{IP:192.168.238.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa Username:docker}
	I1218 12:57:12.355387   12728 command_runner.go:130] > {"iso_version": "v1.32.1-1702490427-17765", "kicbase_version": "v0.0.42-1702394725-17761", "minikube_version": "v1.32.0", "commit": "2780c4af854905e5cd4b94dc93de1f9d00b9040d"}
	I1218 12:57:12.355570   12728 ssh_runner.go:235] Completed: cat /version.json: (4.9430431s)
	I1218 12:57:12.368915   12728 ssh_runner.go:195] Run: systemctl --version
	I1218 12:57:12.453441   12728 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1218 12:57:12.453975   12728 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0561444s)
	I1218 12:57:12.454088   12728 command_runner.go:130] > systemd 247 (247)
	I1218 12:57:12.454088   12728 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1218 12:57:12.467104   12728 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 12:57:12.476559   12728 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1218 12:57:12.476918   12728 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 12:57:12.492645   12728 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 12:57:12.517230   12728 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1218 12:57:12.517254   12728 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1218 12:57:12.517331   12728 start.go:475] detecting cgroup driver to use...
	I1218 12:57:12.517707   12728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 12:57:12.547306   12728 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1218 12:57:12.562505   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1218 12:57:12.591623   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 12:57:12.607911   12728 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 12:57:12.628353   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 12:57:12.656134   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 12:57:12.690010   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 12:57:12.718962   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 12:57:12.751267   12728 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 12:57:12.779256   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 12:57:12.807546   12728 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 12:57:12.821937   12728 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1218 12:57:12.837046   12728 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 12:57:12.864201   12728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 12:57:13.045161   12728 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 12:57:13.073124   12728 start.go:475] detecting cgroup driver to use...
	I1218 12:57:13.090500   12728 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1218 12:57:13.111397   12728 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1218 12:57:13.111555   12728 command_runner.go:130] > [Unit]
	I1218 12:57:13.111555   12728 command_runner.go:130] > Description=Docker Application Container Engine
	I1218 12:57:13.111555   12728 command_runner.go:130] > Documentation=https://docs.docker.com
	I1218 12:57:13.111555   12728 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1218 12:57:13.111555   12728 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1218 12:57:13.111555   12728 command_runner.go:130] > StartLimitBurst=3
	I1218 12:57:13.111555   12728 command_runner.go:130] > StartLimitIntervalSec=60
	I1218 12:57:13.111555   12728 command_runner.go:130] > [Service]
	I1218 12:57:13.111668   12728 command_runner.go:130] > Type=notify
	I1218 12:57:13.111668   12728 command_runner.go:130] > Restart=on-failure
	I1218 12:57:13.111668   12728 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1218 12:57:13.111668   12728 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1218 12:57:13.111758   12728 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1218 12:57:13.111871   12728 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1218 12:57:13.111871   12728 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1218 12:57:13.111871   12728 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1218 12:57:13.111871   12728 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1218 12:57:13.111871   12728 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1218 12:57:13.111871   12728 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1218 12:57:13.111871   12728 command_runner.go:130] > ExecStart=
	I1218 12:57:13.112004   12728 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1218 12:57:13.112004   12728 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1218 12:57:13.112004   12728 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1218 12:57:13.112004   12728 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1218 12:57:13.112004   12728 command_runner.go:130] > LimitNOFILE=infinity
	I1218 12:57:13.112004   12728 command_runner.go:130] > LimitNPROC=infinity
	I1218 12:57:13.112004   12728 command_runner.go:130] > LimitCORE=infinity
	I1218 12:57:13.112004   12728 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1218 12:57:13.112123   12728 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1218 12:57:13.112123   12728 command_runner.go:130] > TasksMax=infinity
	I1218 12:57:13.112123   12728 command_runner.go:130] > TimeoutStartSec=0
	I1218 12:57:13.112123   12728 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1218 12:57:13.112123   12728 command_runner.go:130] > Delegate=yes
	I1218 12:57:13.112123   12728 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1218 12:57:13.112123   12728 command_runner.go:130] > KillMode=process
	I1218 12:57:13.112123   12728 command_runner.go:130] > [Install]
	I1218 12:57:13.112232   12728 command_runner.go:130] > WantedBy=multi-user.target
	I1218 12:57:13.127067   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 12:57:13.160908   12728 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1218 12:57:13.198534   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 12:57:13.231182   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 12:57:13.261718   12728 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 12:57:13.320690   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 12:57:13.343469   12728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 12:57:13.370708   12728 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1218 12:57:13.386568   12728 ssh_runner.go:195] Run: which cri-dockerd
	I1218 12:57:13.392566   12728 command_runner.go:130] > /usr/bin/cri-dockerd
	I1218 12:57:13.404309   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1218 12:57:13.418718   12728 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1218 12:57:13.459161   12728 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1218 12:57:13.634470   12728 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1218 12:57:13.786913   12728 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1218 12:57:13.786913   12728 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1218 12:57:13.834139   12728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 12:57:13.999980   12728 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1218 12:57:15.517916   12728 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5177707s)
	I1218 12:57:15.530806   12728 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1218 12:57:15.692321   12728 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1218 12:57:15.866585   12728 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1218 12:57:16.038076   12728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 12:57:16.202888   12728 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1218 12:57:16.238706   12728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 12:57:16.399295   12728 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1218 12:57:16.502246   12728 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1218 12:57:16.516636   12728 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1218 12:57:16.526384   12728 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1218 12:57:16.526465   12728 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1218 12:57:16.526465   12728 command_runner.go:130] > Device: 16h/22d	Inode: 875         Links: 1
	I1218 12:57:16.526465   12728 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1218 12:57:16.526548   12728 command_runner.go:130] > Access: 2023-12-18 12:57:16.430111643 +0000
	I1218 12:57:16.526548   12728 command_runner.go:130] > Modify: 2023-12-18 12:57:16.430111643 +0000
	I1218 12:57:16.526548   12728 command_runner.go:130] > Change: 2023-12-18 12:57:16.433111643 +0000
	I1218 12:57:16.526548   12728 command_runner.go:130] >  Birth: -
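	"Will wait 60s for socket path" is a plain stat-based readiness poll: keep checking /var/run/cri-dockerd.sock until it exists or the budget is spent, then confirm with the stat output shown above. A local Go equivalent (the 500 ms interval is an assumption):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls for a unix socket path until it exists or the
    // timeout expires, like minikube's 60s wait for cri-dockerd.sock.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		panic(err)
    	}
    	fmt.Println("socket is up")
    }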
	I1218 12:57:16.526617   12728 start.go:543] Will wait 60s for crictl version
	I1218 12:57:16.542155   12728 ssh_runner.go:195] Run: which crictl
	I1218 12:57:16.546892   12728 command_runner.go:130] > /usr/bin/crictl
	I1218 12:57:16.561371   12728 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1218 12:57:16.625278   12728 command_runner.go:130] > Version:  0.1.0
	I1218 12:57:16.625390   12728 command_runner.go:130] > RuntimeName:  docker
	I1218 12:57:16.625390   12728 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1218 12:57:16.625390   12728 command_runner.go:130] > RuntimeApiVersion:  v1
	I1218 12:57:16.625390   12728 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1218 12:57:16.638872   12728 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1218 12:57:16.670929   12728 command_runner.go:130] > 24.0.7
	I1218 12:57:16.681457   12728 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1218 12:57:16.712780   12728 command_runner.go:130] > 24.0.7
	I1218 12:57:16.714689   12728 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1218 12:57:16.714824   12728 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1218 12:57:16.720334   12728 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1218 12:57:16.720334   12728 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1218 12:57:16.720334   12728 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1218 12:57:16.720334   12728 ip.go:207] Found interface: {Index:8 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ed:dc:88 Flags:up|broadcast|multicast|running}
	I1218 12:57:16.722815   12728 ip.go:210] interface addr: fe80::61bd:e46f:b0aa:cbb0/64
	I1218 12:57:16.722815   12728 ip.go:210] interface addr: 192.168.224.1/20
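	getIPForInterface scans the host's adapters, skips any whose name lacks the "vEthernet (Default Switch)" prefix, and takes the first IPv4 address of the match; that 192.168.224.1 is what gets written into the guest's /etc/hosts as host.minikube.internal just below. The same lookup with the standard library (ipForInterface is an illustrative name):

    package main

    import (
    	"fmt"
    	"net"
    	"strings"
    )

    // ipForInterface finds the first IPv4 address on the first interface whose
    // name starts with prefix, e.g. "vEthernet (Default Switch)".
    func ipForInterface(prefix string) (net.IP, error) {
    	ifaces, err := net.Interfaces()
    	if err != nil {
    		return nil, err
    	}
    	for _, ifc := range ifaces {
    		if !strings.HasPrefix(ifc.Name, prefix) {
    			continue // e.g. "Ethernet 2", "Loopback Pseudo-Interface 1" above
    		}
    		addrs, err := ifc.Addrs()
    		if err != nil {
    			return nil, err
    		}
    		for _, a := range addrs {
    			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
    				return ipnet.IP, nil // skips the fe80:: link-local entry
    			}
    		}
    	}
    	return nil, fmt.Errorf("no interface matching %q with an IPv4 address", prefix)
    }

    func main() {
    	ip, err := ipForInterface("vEthernet (Default Switch)")
    	fmt.Println(ip, err)
    }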
	I1218 12:57:16.737736   12728 ssh_runner.go:195] Run: grep 192.168.224.1	host.minikube.internal$ /etc/hosts
	I1218 12:57:16.743339   12728 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 12:57:16.760906   12728 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 12:57:16.770735   12728 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1218 12:57:16.794361   12728 docker.go:671] Got preloaded images: 
	I1218 12:57:16.794637   12728 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I1218 12:57:16.806596   12728 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1218 12:57:16.820642   12728 command_runner.go:139] > {"Repositories":{}}
	I1218 12:57:16.834413   12728 ssh_runner.go:195] Run: which lz4
	I1218 12:57:16.839729   12728 command_runner.go:130] > /usr/bin/lz4
	I1218 12:57:16.839729   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1218 12:57:16.854127   12728 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1218 12:57:16.859336   12728 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1218 12:57:16.859336   12728 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1218 12:57:16.859336   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I1218 12:57:19.355691   12728 docker.go:635] Took 2.515452 seconds to copy over tarball
	I1218 12:57:19.367419   12728 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1218 12:57:28.938222   12728 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (9.570769s)
	I1218 12:57:28.938349   12728 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1218 12:57:29.005360   12728 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1218 12:57:29.026188   12728 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I1218 12:57:29.027539   12728 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
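	Because the preload tarball was unpacked behind the daemon's back, minikube overwrites /var/lib/docker/image/overlay2/repositories.json (previously the stub {"Repositories":{}}) so Docker's tag database points at the layers now on disk: a map from repository to tag/digest references, each resolving to an image ID. A sketch of producing that shape (one sample entry copied from the log; the scp and docker restart steps are omitted):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // repositoriesJSON mirrors the {"Repositories":{repo:{ref:imageID}}} layout
    // Docker keeps in image/overlay2/repositories.json.
    type repositoriesJSON struct {
    	Repositories map[string]map[string]string `json:"Repositories"`
    }

    func main() {
    	repos := repositoriesJSON{Repositories: map[string]map[string]string{
    		"registry.k8s.io/pause": {
    			"registry.k8s.io/pause:3.9": "sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
    		},
    	}}
    	out, err := json.Marshal(repos)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out)) // this is what replaces the stub {"Repositories":{}}
    }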
	I1218 12:57:29.071382   12728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 12:57:29.240879   12728 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1218 12:57:31.607210   12728 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.3663226s)
	I1218 12:57:31.618038   12728 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1218 12:57:31.643655   12728 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1218 12:57:31.643655   12728 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1218 12:57:31.643655   12728 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1218 12:57:31.643655   12728 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1218 12:57:31.643655   12728 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1218 12:57:31.643655   12728 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1218 12:57:31.643655   12728 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1218 12:57:31.643655   12728 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 12:57:31.643655   12728 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1218 12:57:31.643655   12728 cache_images.go:84] Images are preloaded, skipping loading
	I1218 12:57:31.653524   12728 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1218 12:57:31.686234   12728 command_runner.go:130] > cgroupfs
	I1218 12:57:31.686873   12728 cni.go:84] Creating CNI manager for ""
	I1218 12:57:31.687106   12728 cni.go:136] 1 nodes found, recommending kindnet
	I1218 12:57:31.687167   12728 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1218 12:57:31.687167   12728 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.238.182 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-015900 NodeName:multinode-015900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.238.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.238.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 12:57:31.687515   12728 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.238.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-015900"
	  kubeletExtraArgs:
	    node-ip: 192.168.238.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.238.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 12:57:31.687742   12728 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-015900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.238.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-015900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1218 12:57:31.708421   12728 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1218 12:57:31.725488   12728 command_runner.go:130] > kubeadm
	I1218 12:57:31.725613   12728 command_runner.go:130] > kubectl
	I1218 12:57:31.725673   12728 command_runner.go:130] > kubelet
	I1218 12:57:31.725673   12728 binaries.go:44] Found k8s binaries, skipping transfer
	I1218 12:57:31.739209   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 12:57:31.753192   12728 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1218 12:57:31.778528   12728 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1218 12:57:31.804238   12728 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2108 bytes)
	I1218 12:57:31.851399   12728 ssh_runner.go:195] Run: grep 192.168.238.182	control-plane.minikube.internal$ /etc/hosts
	I1218 12:57:31.857210   12728 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.238.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
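
For readers tracing this step: the bash one-liner above is an idempotent /etc/hosts rewrite — filter out any stale control-plane.minikube.internal line, append the current IP, and copy the result back over /etc/hosts with sudo. A minimal Go sketch of the same pattern (the updateHosts helper and the literal values are illustrative, not minikube's actual API):

```go
package main

import (
	"fmt"
	"os/exec"
)

// updateHosts rewrites /etc/hosts so exactly one entry maps host to ip,
// mirroring the logged bash one-liner: grep out the old line, append
// the fresh one, then sudo-copy the temp file back into place.
func updateHosts(ip, host string) error {
	entry := ip + "\t" + host // real tab, matching the grep -v pattern
	cmd := fmt.Sprintf(
		`{ grep -v $'\t%s$' /etc/hosts; echo "%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts`,
		host, entry)
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func main() {
	// Values taken from the log above.
	if err := updateHosts("192.168.238.182", "control-plane.minikube.internal"); err != nil {
		fmt.Println("hosts update failed:", err)
	}
}
```
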
	I1218 12:57:31.876385   12728 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900 for IP: 192.168.238.182
	I1218 12:57:31.876638   12728 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 12:57:31.877499   12728 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I1218 12:57:31.877811   12728 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I1218 12:57:31.878628   12728 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\client.key
	I1218 12:57:31.878628   12728 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\client.crt with IP's: []
	I1218 12:57:31.962683   12728 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\client.crt ...
	I1218 12:57:31.962683   12728 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\client.crt: {Name:mk443893db2ab4547173669cb5fb85af266c047f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 12:57:31.965175   12728 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\client.key ...
	I1218 12:57:31.965281   12728 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\client.key: {Name:mkf8d591a6b02a85c501b46c227177800c278172 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 12:57:31.966536   12728 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.key.6c162e13
	I1218 12:57:31.966762   12728 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.crt.6c162e13 with IP's: [192.168.238.182 10.96.0.1 127.0.0.1 10.0.0.1]
	I1218 12:57:32.126840   12728 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.crt.6c162e13 ...
	I1218 12:57:32.126840   12728 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.crt.6c162e13: {Name:mk12629f70b92d5152b01857c3d0d0c6fa3632c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 12:57:32.128938   12728 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.key.6c162e13 ...
	I1218 12:57:32.128938   12728 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.key.6c162e13: {Name:mkb94417e121f818ffd804c96f0443c7e09195d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 12:57:32.129195   12728 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.crt.6c162e13 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.crt
	I1218 12:57:32.143256   12728 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.key.6c162e13 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.key
	I1218 12:57:32.144372   12728 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\proxy-client.key
	I1218 12:57:32.145401   12728 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\proxy-client.crt with IP's: []
	I1218 12:57:32.613324   12728 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\proxy-client.crt ...
	I1218 12:57:32.613324   12728 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\proxy-client.crt: {Name:mk00d4f8ae0fbc47c73383835c3cafe25f66cfa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 12:57:32.615239   12728 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\proxy-client.key ...
	I1218 12:57:32.615239   12728 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\proxy-client.key: {Name:mkf726965699025bc16f7e34ff9e188132cd1885 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 12:57:32.615724   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1218 12:57:32.615724   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1218 12:57:32.616724   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1218 12:57:32.627986   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1218 12:57:32.628633   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1218 12:57:32.628819   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1218 12:57:32.628983   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1218 12:57:32.629112   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1218 12:57:32.629665   12728 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\14928.pem (1338 bytes)
	W1218 12:57:32.630076   12728 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\14928_empty.pem, impossibly tiny 0 bytes
	I1218 12:57:32.630237   12728 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1218 12:57:32.630519   12728 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1218 12:57:32.630957   12728 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1218 12:57:32.631001   12728 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1218 12:57:32.631726   12728 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem (1708 bytes)
	I1218 12:57:32.631904   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\14928.pem -> /usr/share/ca-certificates/14928.pem
	I1218 12:57:32.632112   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem -> /usr/share/ca-certificates/149282.pem
	I1218 12:57:32.632287   12728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1218 12:57:32.633793   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1218 12:57:32.675421   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 12:57:32.714222   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 12:57:32.754585   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1218 12:57:32.792305   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 12:57:32.833479   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 12:57:32.872044   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 12:57:32.914883   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1218 12:57:32.964625   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\14928.pem --> /usr/share/ca-certificates/14928.pem (1338 bytes)
	I1218 12:57:33.006669   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem --> /usr/share/ca-certificates/149282.pem (1708 bytes)
	I1218 12:57:33.048560   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 12:57:33.091541   12728 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 12:57:33.132432   12728 ssh_runner.go:195] Run: openssl version
	I1218 12:57:33.138589   12728 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1218 12:57:33.149704   12728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14928.pem && ln -fs /usr/share/ca-certificates/14928.pem /etc/ssl/certs/14928.pem"
	I1218 12:57:33.179238   12728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14928.pem
	I1218 12:57:33.185841   12728 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 18 11:59 /usr/share/ca-certificates/14928.pem
	I1218 12:57:33.185841   12728 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 18 11:59 /usr/share/ca-certificates/14928.pem
	I1218 12:57:33.198114   12728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14928.pem
	I1218 12:57:33.205514   12728 command_runner.go:130] > 51391683
	I1218 12:57:33.217534   12728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14928.pem /etc/ssl/certs/51391683.0"
	I1218 12:57:33.251552   12728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149282.pem && ln -fs /usr/share/ca-certificates/149282.pem /etc/ssl/certs/149282.pem"
	I1218 12:57:33.280643   12728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149282.pem
	I1218 12:57:33.287001   12728 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 18 11:59 /usr/share/ca-certificates/149282.pem
	I1218 12:57:33.287001   12728 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 18 11:59 /usr/share/ca-certificates/149282.pem
	I1218 12:57:33.299107   12728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149282.pem
	I1218 12:57:33.307818   12728 command_runner.go:130] > 3ec20f2e
	I1218 12:57:33.320269   12728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149282.pem /etc/ssl/certs/3ec20f2e.0"
	I1218 12:57:33.347664   12728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1218 12:57:33.375157   12728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 12:57:33.381208   12728 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 18 11:45 /usr/share/ca-certificates/minikubeCA.pem
	I1218 12:57:33.381208   12728 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 18 11:45 /usr/share/ca-certificates/minikubeCA.pem
	I1218 12:57:33.393591   12728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 12:57:33.401438   12728 command_runner.go:130] > b5213941
	I1218 12:57:33.414432   12728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
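
The three openssl/ln exchanges above follow one pattern: `openssl x509 -hash -noout -in <cert>` prints the certificate's subject hash (e.g. b5213941 for minikubeCA.pem), and the cert is then symlinked as /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can resolve it by hash lookup, the same layout c_rehash produces. A hedged Go sketch of that pattern (run on the node, error handling trimmed):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// trustCert computes the OpenSSL subject hash of pemPath and links the
// cert into /etc/ssl/certs/<hash>.0, as the logged ln -fs commands do.
func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // mimic ln -fs: drop any stale link first
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
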
	I1218 12:57:33.442869   12728 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1218 12:57:33.448167   12728 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1218 12:57:33.448167   12728 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1218 12:57:33.448766   12728 kubeadm.go:404] StartCluster: {Name:multinode-015900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-015900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.238.182 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 12:57:33.458234   12728 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1218 12:57:33.499233   12728 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 12:57:33.514935   12728 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1218 12:57:33.514935   12728 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1218 12:57:33.514935   12728 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1218 12:57:33.528420   12728 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 12:57:33.553615   12728 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 12:57:33.567184   12728 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1218 12:57:33.567184   12728 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1218 12:57:33.567283   12728 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1218 12:57:33.567283   12728 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 12:57:33.567338   12728 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 12:57:33.567409   12728 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1218 12:57:34.324173   12728 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 12:57:34.324224   12728 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 12:57:48.133987   12728 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1218 12:57:48.133987   12728 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1218 12:57:48.133987   12728 kubeadm.go:322] [preflight] Running pre-flight checks
	I1218 12:57:48.133987   12728 command_runner.go:130] > [preflight] Running pre-flight checks
	I1218 12:57:48.134481   12728 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 12:57:48.134481   12728 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 12:57:48.134481   12728 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 12:57:48.134481   12728 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 12:57:48.134481   12728 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1218 12:57:48.134481   12728 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1218 12:57:48.134481   12728 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 12:57:48.135517   12728 out.go:204]   - Generating certificates and keys ...
	I1218 12:57:48.134481   12728 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 12:57:48.135517   12728 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1218 12:57:48.135517   12728 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1218 12:57:48.135517   12728 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1218 12:57:48.136484   12728 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1218 12:57:48.136484   12728 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 12:57:48.136484   12728 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 12:57:48.136484   12728 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1218 12:57:48.136484   12728 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1218 12:57:48.136484   12728 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1218 12:57:48.136484   12728 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1218 12:57:48.136484   12728 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1218 12:57:48.136484   12728 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1218 12:57:48.136484   12728 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1218 12:57:48.136484   12728 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1218 12:57:48.137534   12728 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-015900] and IPs [192.168.238.182 127.0.0.1 ::1]
	I1218 12:57:48.137534   12728 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-015900] and IPs [192.168.238.182 127.0.0.1 ::1]
	I1218 12:57:48.137534   12728 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1218 12:57:48.137534   12728 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1218 12:57:48.137534   12728 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-015900] and IPs [192.168.238.182 127.0.0.1 ::1]
	I1218 12:57:48.137534   12728 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-015900] and IPs [192.168.238.182 127.0.0.1 ::1]
	I1218 12:57:48.137534   12728 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 12:57:48.137534   12728 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 12:57:48.138507   12728 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 12:57:48.138507   12728 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 12:57:48.138507   12728 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1218 12:57:48.138507   12728 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1218 12:57:48.138507   12728 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 12:57:48.138507   12728 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 12:57:48.138507   12728 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 12:57:48.138507   12728 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 12:57:48.138507   12728 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 12:57:48.138507   12728 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 12:57:48.138507   12728 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 12:57:48.138507   12728 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 12:57:48.138507   12728 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 12:57:48.138507   12728 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 12:57:48.139512   12728 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 12:57:48.139512   12728 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 12:57:48.139512   12728 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 12:57:48.139512   12728 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 12:57:48.140651   12728 out.go:204]   - Booting up control plane ...
	I1218 12:57:48.140651   12728 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 12:57:48.140651   12728 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 12:57:48.140651   12728 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 12:57:48.141509   12728 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 12:57:48.141509   12728 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 12:57:48.141509   12728 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 12:57:48.141509   12728 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 12:57:48.141509   12728 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 12:57:48.141509   12728 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 12:57:48.141509   12728 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 12:57:48.141509   12728 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1218 12:57:48.141509   12728 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1218 12:57:48.142493   12728 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1218 12:57:48.142493   12728 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1218 12:57:48.142493   12728 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.006892 seconds
	I1218 12:57:48.142493   12728 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.006892 seconds
	I1218 12:57:48.142493   12728 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1218 12:57:48.142493   12728 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1218 12:57:48.142493   12728 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1218 12:57:48.143494   12728 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1218 12:57:48.143494   12728 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1218 12:57:48.143494   12728 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1218 12:57:48.143494   12728 command_runner.go:130] > [mark-control-plane] Marking the node multinode-015900 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1218 12:57:48.143494   12728 kubeadm.go:322] [mark-control-plane] Marking the node multinode-015900 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1218 12:57:48.143494   12728 command_runner.go:130] > [bootstrap-token] Using token: wngx84.cihr8ssvap9im7kf
	I1218 12:57:48.143494   12728 kubeadm.go:322] [bootstrap-token] Using token: wngx84.cihr8ssvap9im7kf
	I1218 12:57:48.144495   12728 out.go:204]   - Configuring RBAC rules ...
	I1218 12:57:48.144495   12728 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1218 12:57:48.144495   12728 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1218 12:57:48.145496   12728 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1218 12:57:48.145496   12728 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1218 12:57:48.145496   12728 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1218 12:57:48.145496   12728 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1218 12:57:48.145496   12728 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1218 12:57:48.146494   12728 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1218 12:57:48.146494   12728 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1218 12:57:48.146494   12728 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1218 12:57:48.146494   12728 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1218 12:57:48.146494   12728 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1218 12:57:48.146494   12728 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1218 12:57:48.146494   12728 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1218 12:57:48.146494   12728 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1218 12:57:48.146494   12728 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1218 12:57:48.146494   12728 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1218 12:57:48.146494   12728 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1218 12:57:48.146494   12728 kubeadm.go:322] 
	I1218 12:57:48.147489   12728 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1218 12:57:48.147489   12728 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1218 12:57:48.147489   12728 kubeadm.go:322] 
	I1218 12:57:48.147489   12728 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1218 12:57:48.147489   12728 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1218 12:57:48.147489   12728 kubeadm.go:322] 
	I1218 12:57:48.147489   12728 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1218 12:57:48.147489   12728 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1218 12:57:48.147489   12728 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1218 12:57:48.147489   12728 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1218 12:57:48.147489   12728 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1218 12:57:48.147489   12728 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1218 12:57:48.147489   12728 kubeadm.go:322] 
	I1218 12:57:48.147489   12728 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1218 12:57:48.147489   12728 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1218 12:57:48.147489   12728 kubeadm.go:322] 
	I1218 12:57:48.148488   12728 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1218 12:57:48.148488   12728 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1218 12:57:48.148488   12728 kubeadm.go:322] 
	I1218 12:57:48.148488   12728 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1218 12:57:48.148488   12728 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1218 12:57:48.148488   12728 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1218 12:57:48.148488   12728 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1218 12:57:48.148488   12728 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1218 12:57:48.148488   12728 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1218 12:57:48.148488   12728 kubeadm.go:322] 
	I1218 12:57:48.148488   12728 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1218 12:57:48.148488   12728 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1218 12:57:48.148488   12728 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1218 12:57:48.148488   12728 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1218 12:57:48.148488   12728 kubeadm.go:322] 
	I1218 12:57:48.149499   12728 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token wngx84.cihr8ssvap9im7kf \
	I1218 12:57:48.149499   12728 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token wngx84.cihr8ssvap9im7kf \
	I1218 12:57:48.149499   12728 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b2fa66f0127ff189a61b5e0d7ad6d9c9a72d2910f0374f3c179dae174436a982 \
	I1218 12:57:48.149499   12728 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:b2fa66f0127ff189a61b5e0d7ad6d9c9a72d2910f0374f3c179dae174436a982 \
	I1218 12:57:48.149499   12728 kubeadm.go:322] 	--control-plane 
	I1218 12:57:48.149499   12728 command_runner.go:130] > 	--control-plane 
	I1218 12:57:48.149499   12728 kubeadm.go:322] 
	I1218 12:57:48.149499   12728 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1218 12:57:48.149499   12728 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1218 12:57:48.149499   12728 kubeadm.go:322] 
	I1218 12:57:48.149499   12728 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token wngx84.cihr8ssvap9im7kf \
	I1218 12:57:48.149499   12728 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token wngx84.cihr8ssvap9im7kf \
	I1218 12:57:48.150488   12728 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b2fa66f0127ff189a61b5e0d7ad6d9c9a72d2910f0374f3c179dae174436a982 
	I1218 12:57:48.150488   12728 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:b2fa66f0127ff189a61b5e0d7ad6d9c9a72d2910f0374f3c179dae174436a982 
	I1218 12:57:48.150488   12728 cni.go:84] Creating CNI manager for ""
	I1218 12:57:48.150488   12728 cni.go:136] 1 nodes found, recommending kindnet
	I1218 12:57:48.150488   12728 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1218 12:57:48.164489   12728 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1218 12:57:48.174940   12728 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1218 12:57:48.175047   12728 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1218 12:57:48.175047   12728 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1218 12:57:48.175079   12728 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1218 12:57:48.175079   12728 command_runner.go:130] > Access: 2023-12-18 12:56:02.700638100 +0000
	I1218 12:57:48.175079   12728 command_runner.go:130] > Modify: 2023-12-13 23:27:31.000000000 +0000
	I1218 12:57:48.175079   12728 command_runner.go:130] > Change: 2023-12-18 12:55:51.112000000 +0000
	I1218 12:57:48.175079   12728 command_runner.go:130] >  Birth: -
	I1218 12:57:48.175141   12728 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1218 12:57:48.175141   12728 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1218 12:57:48.233131   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1218 12:57:49.706592   12728 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1218 12:57:49.706592   12728 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1218 12:57:49.706592   12728 command_runner.go:130] > serviceaccount/kindnet created
	I1218 12:57:49.706592   12728 command_runner.go:130] > daemonset.apps/kindnet created
	I1218 12:57:49.706733   12728 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.4735961s)
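
Applying the kindnet manifest is a plain `kubectl apply` with the node-local kubeconfig, exactly as the Run line above shows. A trimmed Go sketch of that invocation (paths copied from the log, offered only as an illustration of the exec pattern):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Apply the CNI manifest that minikube scp'd to /var/tmp/minikube.
	out, err := exec.Command(
		"sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml",
	).CombinedOutput()
	fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet created"
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
```
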
	I1218 12:57:49.706800   12728 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1218 12:57:49.722595   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:49.723550   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=30d8ecd1811578f7b9db580c501c654c189f68d4 minikube.k8s.io/name=multinode-015900 minikube.k8s.io/updated_at=2023_12_18T12_57_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:49.726641   12728 command_runner.go:130] > -16
	I1218 12:57:49.726641   12728 ops.go:34] apiserver oom_adj: -16
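
The oom_adj probe above is literally `cat /proc/$(pgrep kube-apiserver)/oom_adj`; kubeadm-managed control-plane processes carry a strongly negative score (-16 here) so the kernel OOM killer prefers other victims. A minimal sketch of the same check in Go (assumes at least one kube-apiserver process is running):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep prints matching PIDs, one per line; it exits non-zero if none match.
	pids, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
		return
	}
	first := strings.Fields(string(pids))[0]
	adj, err := os.ReadFile("/proc/" + first + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj) // e.g. -16
}
```
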
	I1218 12:57:49.882435   12728 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1218 12:57:49.890421   12728 command_runner.go:130] > node/multinode-015900 labeled
	I1218 12:57:49.898137   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:50.028695   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:50.406976   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:50.520269   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:50.902348   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:51.024725   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:51.407290   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:51.535026   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:51.909684   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:52.018709   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:52.411323   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:52.544268   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:52.895474   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:53.022239   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:53.398869   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:53.512785   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:53.904948   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:54.033153   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:54.403099   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:54.521195   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:54.912618   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:55.036280   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:55.399586   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:55.511159   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:55.901320   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:56.022008   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:56.403621   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:56.527385   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:56.906268   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:57.023171   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:57.404284   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:57.593903   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:57.911156   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:58.028141   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:58.398195   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:58.533673   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:58.902316   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:59.077232   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:59.412096   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:57:59.542261   12728 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 12:57:59.902832   12728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 12:58:00.056154   12728 command_runner.go:130] > NAME      SECRETS   AGE
	I1218 12:58:00.056154   12728 command_runner.go:130] > default   0         1s
	I1218 12:58:00.056154   12728 kubeadm.go:1088] duration metric: took 10.349206s to wait for elevateKubeSystemPrivileges.
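
The burst of NotFound errors above is expected: immediately after kubeadm init, the controller-manager has not yet created the `default` ServiceAccount, so minikube simply re-runs `kubectl get sa default` (roughly every 500ms in this log) until it succeeds. A hedged sketch of such a wait loop (the waitForDefaultSA helper is illustrative, not minikube's actual code):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls until `kubectl get sa default` succeeds or the
// deadline passes; kubeconfig is the admin kubeconfig on the node.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run()
		if err == nil {
			return nil // token controller has created the ServiceAccount
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence in the log
	}
	return fmt.Errorf("default ServiceAccount not created within %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
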
	I1218 12:58:00.056154   12728 kubeadm.go:406] StartCluster complete in 26.6072932s
	I1218 12:58:00.056154   12728 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 12:58:00.056154   12728 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 12:58:00.058161   12728 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 12:58:00.060158   12728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1218 12:58:00.060158   12728 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1218 12:58:00.060158   12728 addons.go:69] Setting storage-provisioner=true in profile "multinode-015900"
	I1218 12:58:00.060158   12728 addons.go:69] Setting default-storageclass=true in profile "multinode-015900"
	I1218 12:58:00.060158   12728 addons.go:231] Setting addon storage-provisioner=true in "multinode-015900"
	I1218 12:58:00.060158   12728 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-015900"
	I1218 12:58:00.060158   12728 host.go:66] Checking if "multinode-015900" exists ...
	I1218 12:58:00.060158   12728 config.go:182] Loaded profile config "multinode-015900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 12:58:00.061160   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:58:00.061160   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:58:00.077160   12728 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 12:58:00.078162   12728 kapi.go:59] client config for multinode-015900: &rest.Config{Host:"https://192.168.238.182:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-015900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-015900\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21a1f20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 12:58:00.080174   12728 cert_rotation.go:137] Starting client certificate rotation controller
	I1218 12:58:00.080174   12728 round_trippers.go:463] GET https://192.168.238.182:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1218 12:58:00.080174   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:00.080174   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:00.080174   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:00.130451   12728 round_trippers.go:574] Response Status: 200 OK in 50 milliseconds
	I1218 12:58:00.130451   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:00.130451   12728 round_trippers.go:580]     Audit-Id: 95860d47-a2e3-4982-903e-8eee393fb543
	I1218 12:58:00.130451   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:00.130708   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:00.130708   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:00.130708   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:00.130708   12728 round_trippers.go:580]     Content-Length: 291
	I1218 12:58:00.130708   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:00 GMT
	I1218 12:58:00.130842   12728 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ac4adb10-5952-477f-b353-31b85c54eafc","resourceVersion":"343","creationTimestamp":"2023-12-18T12:57:48Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1218 12:58:00.131613   12728 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ac4adb10-5952-477f-b353-31b85c54eafc","resourceVersion":"343","creationTimestamp":"2023-12-18T12:57:48Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1218 12:58:00.131729   12728 round_trippers.go:463] PUT https://192.168.238.182:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1218 12:58:00.131729   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:00.131795   12728 round_trippers.go:473]     Content-Type: application/json
	I1218 12:58:00.131795   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:00.131795   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:00.163154   12728 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I1218 12:58:00.163818   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:00.163818   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:00 GMT
	I1218 12:58:00.163818   12728 round_trippers.go:580]     Audit-Id: e4fb48ea-84dd-4e06-a619-cacf4b859053
	I1218 12:58:00.163818   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:00.163951   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:00.163951   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:00.163951   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:00.163951   12728 round_trippers.go:580]     Content-Length: 291
	I1218 12:58:00.163951   12728 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ac4adb10-5952-477f-b353-31b85c54eafc","resourceVersion":"367","creationTimestamp":"2023-12-18T12:57:48Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
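[editor's note] The GET and PUT against .../deployments/coredns/scale above rescale CoreDNS from 2 replicas to 1 through the autoscaling/v1 Scale subresource. A client-go sketch of the same round trip (clientset built as in the earlier sketch; function name is illustrative):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS reads the Scale subresource, edits spec.replicas
// (2 -> 1 in the log above), and writes it back with a PUT.
func rescaleCoreDNS(clientset *kubernetes.Clientset, replicas int32) error {
	ctx := context.Background()
	deps := clientset.AppsV1().Deployments("kube-system")
	scale, err := deps.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = deps.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}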
	I1218 12:58:00.462038   12728 command_runner.go:130] > apiVersion: v1
	I1218 12:58:00.462038   12728 command_runner.go:130] > data:
	I1218 12:58:00.462038   12728 command_runner.go:130] >   Corefile: |
	I1218 12:58:00.462038   12728 command_runner.go:130] >     .:53 {
	I1218 12:58:00.462038   12728 command_runner.go:130] >         errors
	I1218 12:58:00.462038   12728 command_runner.go:130] >         health {
	I1218 12:58:00.462038   12728 command_runner.go:130] >            lameduck 5s
	I1218 12:58:00.462038   12728 command_runner.go:130] >         }
	I1218 12:58:00.462038   12728 command_runner.go:130] >         ready
	I1218 12:58:00.462038   12728 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1218 12:58:00.462038   12728 command_runner.go:130] >            pods insecure
	I1218 12:58:00.462038   12728 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1218 12:58:00.462038   12728 command_runner.go:130] >            ttl 30
	I1218 12:58:00.462038   12728 command_runner.go:130] >         }
	I1218 12:58:00.462038   12728 command_runner.go:130] >         prometheus :9153
	I1218 12:58:00.462038   12728 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1218 12:58:00.462038   12728 command_runner.go:130] >            max_concurrent 1000
	I1218 12:58:00.462038   12728 command_runner.go:130] >         }
	I1218 12:58:00.462038   12728 command_runner.go:130] >         cache 30
	I1218 12:58:00.462038   12728 command_runner.go:130] >         loop
	I1218 12:58:00.462038   12728 command_runner.go:130] >         reload
	I1218 12:58:00.462038   12728 command_runner.go:130] >         loadbalance
	I1218 12:58:00.462038   12728 command_runner.go:130] >     }
	I1218 12:58:00.462038   12728 command_runner.go:130] > kind: ConfigMap
	I1218 12:58:00.462038   12728 command_runner.go:130] > metadata:
	I1218 12:58:00.462038   12728 command_runner.go:130] >   creationTimestamp: "2023-12-18T12:57:48Z"
	I1218 12:58:00.462038   12728 command_runner.go:130] >   name: coredns
	I1218 12:58:00.462038   12728 command_runner.go:130] >   namespace: kube-system
	I1218 12:58:00.462038   12728 command_runner.go:130] >   resourceVersion: "266"
	I1218 12:58:00.462038   12728 command_runner.go:130] >   uid: 40ad0019-312a-4903-9379-40e12697856d
	I1218 12:58:00.463069   12728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
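[editor's note] The sed pipeline above edits the Corefile dumped just before it: the first expression inserts a hosts block immediately before the forward plugin, the second inserts a log directive before errors, and the result is piped to kubectl replace. Reconstructed from those sed expressions, the changed region of the Corefile comes out as (the "..." elides the unchanged plugins):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.224.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        ...
    }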
	I1218 12:58:00.586181   12728 round_trippers.go:463] GET https://192.168.238.182:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1218 12:58:00.586281   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:00.586281   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:00.586370   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:00.597670   12728 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1218 12:58:00.597670   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:00.597670   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:00.597670   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:00.597670   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:00.597670   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:00.597670   12728 round_trippers.go:580]     Content-Length: 291
	I1218 12:58:00.597670   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:00 GMT
	I1218 12:58:00.597670   12728 round_trippers.go:580]     Audit-Id: fc5e368f-f415-447a-ba6c-63c0bfa75b8e
	I1218 12:58:00.597670   12728 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ac4adb10-5952-477f-b353-31b85c54eafc","resourceVersion":"397","creationTimestamp":"2023-12-18T12:57:48Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1218 12:58:00.598669   12728 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-015900" context rescaled to 1 replicas
	I1218 12:58:00.598669   12728 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.238.182 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1218 12:58:00.599665   12728 out.go:177] * Verifying Kubernetes components...
	I1218 12:58:00.623460   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 12:58:01.344512   12728 command_runner.go:130] > configmap/coredns replaced
	I1218 12:58:01.344662   12728 start.go:929] {"host.minikube.internal": 192.168.224.1} host record injected into CoreDNS's ConfigMap
	I1218 12:58:01.345710   12728 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 12:58:01.346481   12728 kapi.go:59] client config for multinode-015900: &rest.Config{Host:"https://192.168.238.182:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-015900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-015900\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21a1f20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 12:58:01.347375   12728 node_ready.go:35] waiting up to 6m0s for node "multinode-015900" to be "Ready" ...
	I1218 12:58:01.347559   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:01.347559   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:01.347559   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:01.347559   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:01.364939   12728 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1218 12:58:01.364939   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:01.364939   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:01.364939   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:01.365444   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:01 GMT
	I1218 12:58:01.365444   12728 round_trippers.go:580]     Audit-Id: df2d7e99-38e1-4ce3-bb58-c5e44c8173a0
	I1218 12:58:01.365444   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:01.365444   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:01.365681   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:01.852611   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:01.852710   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:01.852710   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:01.852710   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:01.856781   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:01.856847   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:01.856847   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:01.856847   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:01.856906   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:01 GMT
	I1218 12:58:01.856906   12728 round_trippers.go:580]     Audit-Id: 3d88e0cf-53a6-43d7-ba5b-a27d24828ec0
	I1218 12:58:01.856906   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:01.856906   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:01.857098   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:02.357190   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:02.357190   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:02.357190   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:02.357190   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:02.361971   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:02.362357   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:02.362357   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:02.362357   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:02.362357   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:02.362357   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:58:02.362605   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:58:02.362357   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:02.363383   12728 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 12:58:02.362659   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:02 GMT
	I1218 12:58:02.364354   12728 round_trippers.go:580]     Audit-Id: 375260c5-7e8e-4649-a6a2-89f01905fde1
	I1218 12:58:02.364470   12728 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 12:58:02.362357   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:58:02.364470   12728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 12:58:02.364595   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:58:02.364703   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:58:02.364891   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:02.365921   12728 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 12:58:02.366772   12728 kapi.go:59] client config for multinode-015900: &rest.Config{Host:"https://192.168.238.182:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-015900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-015900\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21a1f20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 12:58:02.367991   12728 addons.go:231] Setting addon default-storageclass=true in "multinode-015900"
	I1218 12:58:02.368159   12728 host.go:66] Checking if "multinode-015900" exists ...
	I1218 12:58:02.369132   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 12:58:02.853798   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:02.853930   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:02.853930   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:02.853930   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:02.858485   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:02.858955   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:02.858955   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:02.858955   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:02.858955   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:02.859025   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:02.859025   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:02 GMT
	I1218 12:58:02.859025   12728 round_trippers.go:580]     Audit-Id: 1c6a59a2-abc1-46e5-a1a3-c51c5d03f7f7
	I1218 12:58:02.859025   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:03.360906   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:03.361037   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:03.361170   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:03.361216   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:03.365587   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:03.365816   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:03.365816   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:03.365816   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:03.365816   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:03 GMT
	I1218 12:58:03.365942   12728 round_trippers.go:580]     Audit-Id: e8385213-55a1-4f3e-bd7b-afcbc881efdc
	I1218 12:58:03.365942   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:03.365942   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:03.366278   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:03.366907   12728 node_ready.go:58] node "multinode-015900" has status "Ready":"False"
	I1218 12:58:03.853148   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:03.853148   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:03.853148   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:03.853148   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:03.856734   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:03.856734   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:03.857671   12728 round_trippers.go:580]     Audit-Id: 6ba5bafe-adeb-4476-8fbf-22d40862ceb4
	I1218 12:58:03.857671   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:03.857671   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:03.857671   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:03.857720   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:03.857720   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:03 GMT
	I1218 12:58:03.857958   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:04.359760   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:04.359760   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:04.359853   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:04.359853   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:04.364115   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:04.364115   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:04.364115   12728 round_trippers.go:580]     Audit-Id: fea39283-5a9b-4654-8adc-260cabf2e69e
	I1218 12:58:04.364115   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:04.364115   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:04.364115   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:04.364115   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:04.364115   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:04 GMT
	I1218 12:58:04.364527   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:04.563630   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:58:04.563630   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:58:04.563630   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:58:04.610443   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:58:04.610443   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:58:04.610443   12728 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 12:58:04.610707   12728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 12:58:04.610707   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
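[editor's note] The 271-byte storageclass.yaml being copied here is the default-storageclass addon manifest. Judging by the StorageClass the API returns near the end of this log (name standard, provisioner k8s.io/minikube-hostpath, marked as the default class), it is approximately:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
provisioner: k8s.io/minikube-hostpath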
	I1218 12:58:04.848798   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:04.848798   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:04.848798   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:04.848798   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:04.851909   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:04.852949   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:04.852949   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:04.852949   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:04 GMT
	I1218 12:58:04.852949   12728 round_trippers.go:580]     Audit-Id: faa94ecb-494e-41a4-b957-b342f80a61dc
	I1218 12:58:04.853060   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:04.853060   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:04.853060   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:04.853060   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:05.359743   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:05.359806   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:05.359863   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:05.359863   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:05.365302   12728 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1218 12:58:05.365302   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:05.365302   12728 round_trippers.go:580]     Audit-Id: 2029fcf7-511e-4d47-a65e-98fa0ddd8774
	I1218 12:58:05.365302   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:05.365302   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:05.365302   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:05.365302   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:05.365302   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:05 GMT
	I1218 12:58:05.366314   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:05.851605   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:05.851605   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:05.851728   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:05.851728   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:05.856311   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:05.856396   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:05.856396   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:05.856396   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:05 GMT
	I1218 12:58:05.856396   12728 round_trippers.go:580]     Audit-Id: 7a542b69-9a54-4073-b514-dad9525137cd
	I1218 12:58:05.856484   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:05.856484   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:05.856484   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:05.856822   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:05.857445   12728 node_ready.go:58] node "multinode-015900" has status "Ready":"False"
	I1218 12:58:06.361915   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:06.361982   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:06.361982   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:06.361982   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:06.365409   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:06.365444   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:06.365444   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:06.365444   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:06.365444   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:06.365444   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:06 GMT
	I1218 12:58:06.365504   12728 round_trippers.go:580]     Audit-Id: 5ace8068-b201-4728-be3c-509a8cffa51c
	I1218 12:58:06.365504   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:06.365659   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:06.854681   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 12:58:06.854866   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:58:06.854866   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 12:58:06.855141   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:06.855141   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:06.855141   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:06.855141   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:06.861186   12728 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1218 12:58:06.861186   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:06.861186   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:06.861186   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:06.861186   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:06.861186   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:06.861186   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:06 GMT
	I1218 12:58:06.861186   12728 round_trippers.go:580]     Audit-Id: 18e81e11-c6d0-478d-9db9-c34b483ae0dd
	I1218 12:58:06.861186   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:07.331809   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:58:07.332046   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:58:07.332814   12728 sshutil.go:53] new ssh client: &{IP:192.168.238.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa Username:docker}
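[editor's note] sshutil.go dials the VM with the per-machine id_rsa key shown above; the ssh_runner lines then execute the scp/kubectl commands over that connection. A sketch of the same pattern with golang.org/x/crypto/ssh (an illustration, not minikube's actual runner):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user, and address taken from the sshutil.go line above.
	key, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.168.238.182:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, _ := session.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("%s", out)
}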
	I1218 12:58:07.348169   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:07.348270   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:07.348270   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:07.348270   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:07.351755   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:07.352118   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:07.352118   12728 round_trippers.go:580]     Audit-Id: 39e91a6a-6a3a-45bc-aa8e-0f01aed9cfe5
	I1218 12:58:07.352118   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:07.352118   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:07.352118   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:07.352118   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:07.352118   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:07 GMT
	I1218 12:58:07.352389   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:07.530914   12728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 12:58:07.853918   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:07.853971   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:07.853971   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:07.853971   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:07.857318   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:07.857318   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:07.857318   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:07.857318   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:07.857318   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:07.857318   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:07 GMT
	I1218 12:58:07.857318   12728 round_trippers.go:580]     Audit-Id: b123a29f-976a-475e-885e-85c45e9cb965
	I1218 12:58:07.857800   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:07.858033   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:07.858529   12728 node_ready.go:58] node "multinode-015900" has status "Ready":"False"
	I1218 12:58:08.252871   12728 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1218 12:58:08.252906   12728 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1218 12:58:08.253012   12728 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1218 12:58:08.253012   12728 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1218 12:58:08.253012   12728 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1218 12:58:08.253012   12728 command_runner.go:130] > pod/storage-provisioner created
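[editor's note] The six objects above are the full contents of storage-provisioner.yaml: the provisioner pod plus the ServiceAccount, RBAC bindings, and an Endpoints object it uses for coordination. A spot-check sketch that the pod landed, reusing a clientset built as in the earlier sketch:

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// storageProvisionerPhase fetches the pod created by the apply above
// and returns its current phase.
func storageProvisionerPhase(clientset *kubernetes.Clientset) (string, error) {
	pod, err := clientset.CoreV1().Pods("kube-system").
		Get(context.Background(), "storage-provisioner", metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	return string(pod.Status.Phase), nil
}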
	I1218 12:58:08.363245   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:08.363245   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:08.363245   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:08.363245   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:08.366750   12728 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 12:58:08.366750   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:08.366750   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:08.366881   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:08 GMT
	I1218 12:58:08.366881   12728 round_trippers.go:580]     Audit-Id: 79f20a18-bc4b-43e0-a934-e4d78a25b55f
	I1218 12:58:08.366881   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:08.366881   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:08.366881   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:08.366881   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:08.856243   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:08.856243   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:08.856243   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:08.856350   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:08.859747   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:08.859747   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:08.859747   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:08.859747   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:08.859747   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:08 GMT
	I1218 12:58:08.859747   12728 round_trippers.go:580]     Audit-Id: b01fc719-597e-4bb6-b8dc-267c3fc7d73a
	I1218 12:58:08.859747   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:08.859747   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:08.860742   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:09.363611   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:09.363611   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:09.363611   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:09.363611   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:09.368147   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:09.368541   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:09.368541   12728 round_trippers.go:580]     Audit-Id: 0e3cbd8a-d73b-458d-af04-200c9dda1017
	I1218 12:58:09.368541   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:09.368541   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:09.368541   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:09.368541   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:09.368541   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:09 GMT
	I1218 12:58:09.368939   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:09.457708   12728 main.go:141] libmachine: [stdout =====>] : 192.168.238.182
	
	I1218 12:58:09.457708   12728 main.go:141] libmachine: [stderr =====>] : 
	I1218 12:58:09.457708   12728 sshutil.go:53] new ssh client: &{IP:192.168.238.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa Username:docker}
	I1218 12:58:09.588888   12728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 12:58:09.853297   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:09.853352   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:09.853352   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:09.853468   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:09.856701   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:09.856701   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:09.856701   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:09.856701   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:09.856701   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:09.857115   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:09.857115   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:09 GMT
	I1218 12:58:09.857115   12728 round_trippers.go:580]     Audit-Id: fe01c91d-a259-440d-8e64-c4fd6c20b29a
	I1218 12:58:09.857717   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:09.908690   12728 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1218 12:58:09.909004   12728 round_trippers.go:463] GET https://192.168.238.182:8443/apis/storage.k8s.io/v1/storageclasses
	I1218 12:58:09.909066   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:09.909066   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:09.909066   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:09.912427   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:09.912427   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:09.912427   12728 round_trippers.go:580]     Audit-Id: 4ba45e5a-b95f-4367-b255-89e90284379f
	I1218 12:58:09.912427   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:09.912427   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:09.912427   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:09.912427   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:09.912668   12728 round_trippers.go:580]     Content-Length: 1273
	I1218 12:58:09.912668   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:09 GMT
	I1218 12:58:09.912668   12728 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"standard","uid":"f52fdf98-baf2-4434-b46e-cde6993b02a6","resourceVersion":"420","creationTimestamp":"2023-12-18T12:58:09Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-18T12:58:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1218 12:58:09.913363   12728 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"f52fdf98-baf2-4434-b46e-cde6993b02a6","resourceVersion":"420","creationTimestamp":"2023-12-18T12:58:09Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-18T12:58:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1218 12:58:09.913446   12728 round_trippers.go:463] PUT https://192.168.238.182:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1218 12:58:09.913446   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:09.913500   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:09.913500   12728 round_trippers.go:473]     Content-Type: application/json
	I1218 12:58:09.913500   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:09.916856   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:09.916856   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:09.916856   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:09.916856   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:09.916856   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:09.916856   12728 round_trippers.go:580]     Content-Length: 1220
	I1218 12:58:09.916856   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:09 GMT
	I1218 12:58:09.916856   12728 round_trippers.go:580]     Audit-Id: c26b4cd3-e2ef-4f72-b41b-b47b472c2471
	I1218 12:58:09.916856   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:09.916856   12728 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"f52fdf98-baf2-4434-b46e-cde6993b02a6","resourceVersion":"420","creationTimestamp":"2023-12-18T12:58:09Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-18T12:58:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
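The GET and PUT pair above is the default-storageclass addon re-asserting the storageclass.kubernetes.io/is-default-class annotation on the freshly applied "standard" class. A minimal client-go sketch of that read-modify-write follows; it assumes an already-configured clientset and is an approximation of the shape of the traffic, not the addon's actual code.

package addons

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// markDefault re-reads the StorageClass and writes it back with the
// default-class annotation set: the same GET + PUT seen in the log.
func markDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	// The object still carries the resourceVersion from the GET (420
	// above), so a conflicting concurrent write fails with a 409
	// instead of being silently clobbered.
	_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
	return err
}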
	I1218 12:58:09.916856   12728 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1218 12:58:09.919803   12728 addons.go:502] enable addons completed in 9.8596088s: enabled=[storage-provisioner default-storageclass]
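With the addons enabled, the log settles into the node_ready poll: roughly twice a second it GETs the Node object and inspects the Ready condition, logging "Ready":"False" until the kubelet reports otherwise (visible below as the resourceVersion jump from 355 to 430). The following is a hedged client-go approximation of that loop, not minikube's exact node_ready implementation; the kubeconfig path and timeout are illustrative stand-ins.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the NodeReady condition is True, which is
// what flips the log from "Ready":"False" to "Ready":"True".
func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForNodeReady re-fetches the node every 500ms until it is Ready or
// the context expires, mirroring the GET cadence in the log.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	for {
		n, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if nodeIsReady(n) {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForNodeReady(ctx, cs, "multinode-015900"); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}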
	I1218 12:58:10.359995   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:10.359995   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:10.359995   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:10.359995   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:10.363560   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:10.363560   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:10.363560   12728 round_trippers.go:580]     Audit-Id: 94dd893a-ec7b-4d15-bc11-fadd56160fe1
	I1218 12:58:10.363560   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:10.363936   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:10.363936   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:10.363936   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:10.363936   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:10 GMT
	I1218 12:58:10.364501   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:10.364851   12728 node_ready.go:58] node "multinode-015900" has status "Ready":"False"
	I1218 12:58:10.852110   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:10.852110   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:10.852194   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:10.852194   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:10.963534   12728 round_trippers.go:574] Response Status: 200 OK in 111 milliseconds
	I1218 12:58:10.964330   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:10.964330   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:10.964330   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:10.964330   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:10.964330   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:10 GMT
	I1218 12:58:10.964330   12728 round_trippers.go:580]     Audit-Id: b6b8a19e-ce5e-470f-b0d2-1649c7d083c4
	I1218 12:58:10.964330   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:10.964651   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:11.356286   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:11.356286   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:11.356286   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:11.356286   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:11.359651   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:11.359651   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:11.360106   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:11.360106   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:11 GMT
	I1218 12:58:11.360106   12728 round_trippers.go:580]     Audit-Id: 6581356e-89a9-4a5a-b46d-efa261ae2749
	I1218 12:58:11.360106   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:11.360106   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:11.360106   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:11.360444   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:11.854845   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:11.854845   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:11.854947   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:11.854947   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:11.859853   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:11.859947   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:11.859947   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:11.859947   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:11.859947   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:11 GMT
	I1218 12:58:11.859947   12728 round_trippers.go:580]     Audit-Id: fcbae442-292b-409c-9c5c-803867701eff
	I1218 12:58:11.859947   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:11.859947   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:11.860221   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:12.356627   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:12.356627   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:12.356627   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:12.356627   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:12.360137   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:12.360710   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:12.360710   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:12 GMT
	I1218 12:58:12.360710   12728 round_trippers.go:580]     Audit-Id: 13660282-adf7-4854-a0d8-6c139a2a4b07
	I1218 12:58:12.360710   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:12.360710   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:12.360710   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:12.360710   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:12.361118   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:12.856616   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:12.856728   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:12.856728   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:12.856728   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:12.861027   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:12.861497   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:12.861497   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:12.861497   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:12.861497   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:12.861497   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:12 GMT
	I1218 12:58:12.861497   12728 round_trippers.go:580]     Audit-Id: 1a62351b-a6f4-48a7-ada1-98cb52b203e3
	I1218 12:58:12.861497   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:12.861805   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:12.862500   12728 node_ready.go:58] node "multinode-015900" has status "Ready":"False"
	I1218 12:58:13.355856   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:13.356051   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:13.356051   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:13.356051   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:13.362617   12728 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1218 12:58:13.362617   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:13.362617   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:13.362617   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:13 GMT
	I1218 12:58:13.362617   12728 round_trippers.go:580]     Audit-Id: fd564bd2-8737-4ba8-9faf-043e799b5c21
	I1218 12:58:13.362617   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:13.362617   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:13.362617   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:13.362617   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:13.854790   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:13.854901   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:13.854901   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:13.854901   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:13.858486   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:13.858591   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:13.858591   12728 round_trippers.go:580]     Audit-Id: 452dd591-77bf-481d-9653-e36ae69bc47e
	I1218 12:58:13.858591   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:13.858591   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:13.858591   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:13.858591   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:13.858591   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:13 GMT
	I1218 12:58:13.858960   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:14.354731   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:14.354731   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:14.354840   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:14.354840   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:14.358230   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:14.359055   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:14.359055   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:14.359055   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:14.359055   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:14.359055   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:14.359163   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:14 GMT
	I1218 12:58:14.359163   12728 round_trippers.go:580]     Audit-Id: 26200d5f-96d6-436c-86f5-112c950fa1a1
	I1218 12:58:14.359440   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:14.855618   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:14.855733   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:14.855733   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:14.855733   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:14.860019   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:14.860141   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:14.860141   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:14.860141   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:14.860244   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:14 GMT
	I1218 12:58:14.860244   12728 round_trippers.go:580]     Audit-Id: f75edcba-2147-4f1d-9259-ff4e8a6e62ca
	I1218 12:58:14.860244   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:14.860244   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:14.860459   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:15.354896   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:15.355002   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:15.355002   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:15.355002   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:15.358394   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:15.359266   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:15.359266   12728 round_trippers.go:580]     Audit-Id: b565da38-d10c-4d80-8ba5-c88e585ee20f
	I1218 12:58:15.359266   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:15.359266   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:15.359266   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:15.359266   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:15.359266   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:15 GMT
	I1218 12:58:15.359723   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:15.360195   12728 node_ready.go:58] node "multinode-015900" has status "Ready":"False"
	I1218 12:58:15.854535   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:15.854535   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:15.854535   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:15.854677   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:15.859108   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:15.859409   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:15.859409   12728 round_trippers.go:580]     Audit-Id: 5c9435ef-78fb-49fb-a226-ede4aabb1d10
	I1218 12:58:15.859409   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:15.859409   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:15.859409   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:15.859409   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:15.859409   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:15 GMT
	I1218 12:58:15.859693   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:16.352240   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:16.352355   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:16.352355   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:16.352355   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:16.355944   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:16.356652   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:16.356652   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:16.356652   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:16.356652   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:16 GMT
	I1218 12:58:16.356652   12728 round_trippers.go:580]     Audit-Id: f932eb0f-c14e-4645-894d-de3a7ceec96b
	I1218 12:58:16.356652   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:16.356759   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:16.356995   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"355","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I1218 12:58:16.853523   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:16.853670   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:16.853670   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:16.853670   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:16.858028   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:16.858504   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:16.858504   12728 round_trippers.go:580]     Audit-Id: ecfedde6-fd3a-49eb-916b-9e8adb556c4e
	I1218 12:58:16.858504   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:16.858504   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:16.858504   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:16.858504   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:16.858504   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:16 GMT
	I1218 12:58:16.859112   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"430","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I1218 12:58:16.859590   12728 node_ready.go:49] node "multinode-015900" has status "Ready":"True"
	I1218 12:58:16.859723   12728 node_ready.go:38] duration metric: took 15.5122919s waiting for node "multinode-015900" to be "Ready" ...
	I1218 12:58:16.859723   12728 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
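Once the node is Ready (15.5s here), the wait shifts from node conditions to system-critical pods: list kube-system once, then re-poll each pod whose PodReady condition is not yet True, as happens below for coredns-5dd5756b68-256fn. A sketch of that check under the same client-go assumptions as above; it is an illustration of the pattern, not pod_ready.go itself.

package podready

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isReady mirrors the per-pod test behind the log's waiting messages:
// a pod counts as Ready when its PodReady condition is True.
func isReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// systemPodsNotReady lists kube-system pods once and returns the names
// of those still not Ready; the real loop then re-polls each laggard
// individually, which is the repeated coredns GET seen below.
func systemPodsNotReady(ctx context.Context, cs kubernetes.Interface) ([]string, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var notReady []string
	for i := range pods.Items {
		if !isReady(&pods.Items[i]) {
			notReady = append(notReady, pods.Items[i].Name)
		}
	}
	return notReady, nil
}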
	I1218 12:58:16.860040   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods
	I1218 12:58:16.860040   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:16.860040   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:16.860040   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:16.870847   12728 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1218 12:58:16.870847   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:16.870847   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:16 GMT
	I1218 12:58:16.870847   12728 round_trippers.go:580]     Audit-Id: 848f76d7-41be-46a7-a1f5-03c26f208cbb
	I1218 12:58:16.870847   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:16.870847   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:16.870847   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:16.870847   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:16.872199   12728 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"436"},"items":[{"metadata":{"name":"coredns-5dd5756b68-256fn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bd59fc6-50d6-4764-823d-71232811cff2","resourceVersion":"436","creationTimestamp":"2023-12-18T12:58:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d117bf5f-d14c-4a53-ad5a-250a0b115b2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d117bf5f-d14c-4a53-ad5a-250a0b115b2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54052 chars]
	I1218 12:58:16.877632   12728 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-256fn" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:16.877785   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-256fn
	I1218 12:58:16.877785   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:16.877785   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:16.877785   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:16.880177   12728 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 12:58:16.880177   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:16.880177   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:16 GMT
	I1218 12:58:16.880177   12728 round_trippers.go:580]     Audit-Id: 1ab0faaa-6a69-47a5-9250-9493038adf9e
	I1218 12:58:16.880177   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:16.880177   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:16.880177   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:16.880580   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:16.880938   12728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-256fn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bd59fc6-50d6-4764-823d-71232811cff2","resourceVersion":"436","creationTimestamp":"2023-12-18T12:58:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d117bf5f-d14c-4a53-ad5a-250a0b115b2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d117bf5f-d14c-4a53-ad5a-250a0b115b2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6153 chars]
	I1218 12:58:16.881513   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:16.881513   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:16.881513   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:16.881513   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:16.883793   12728 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 12:58:16.883793   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:16.883793   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:16.883793   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:16 GMT
	I1218 12:58:16.883793   12728 round_trippers.go:580]     Audit-Id: 22dd2b07-25c0-442b-b33b-c8fc74a48f5f
	I1218 12:58:16.883793   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:16.883793   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:16.884724   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:16.885097   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"430","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I1218 12:58:17.393558   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-256fn
	I1218 12:58:17.393558   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:17.393633   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:17.393633   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:17.397007   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:17.397007   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:17.397418   12728 round_trippers.go:580]     Audit-Id: 857c92b7-c0e1-4f71-aa73-440460713cd2
	I1218 12:58:17.397418   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:17.397418   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:17.397418   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:17.397418   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:17.397418   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:17 GMT
	I1218 12:58:17.397496   12728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-256fn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bd59fc6-50d6-4764-823d-71232811cff2","resourceVersion":"436","creationTimestamp":"2023-12-18T12:58:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d117bf5f-d14c-4a53-ad5a-250a0b115b2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d117bf5f-d14c-4a53-ad5a-250a0b115b2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6153 chars]
	I1218 12:58:17.398252   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:17.398355   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:17.398355   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:17.398355   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:17.400583   12728 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 12:58:17.400583   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:17.400583   12728 round_trippers.go:580]     Audit-Id: d1e95f3c-3bbf-48d9-8c71-a54fc04ec690
	I1218 12:58:17.400583   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:17.400583   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:17.400583   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:17.400583   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:17.400583   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:17 GMT
	I1218 12:58:17.401751   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"430","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I1218 12:58:17.884993   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-256fn
	I1218 12:58:17.884993   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:17.885071   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:17.885071   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:17.888476   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:17.888476   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:17.888476   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:17 GMT
	I1218 12:58:17.888476   12728 round_trippers.go:580]     Audit-Id: 5b0ee975-10f8-4b69-b693-41ef1d9b3e08
	I1218 12:58:17.888988   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:17.888988   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:17.888988   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:17.888988   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:17.889330   12728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-256fn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bd59fc6-50d6-4764-823d-71232811cff2","resourceVersion":"436","creationTimestamp":"2023-12-18T12:58:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d117bf5f-d14c-4a53-ad5a-250a0b115b2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d117bf5f-d14c-4a53-ad5a-250a0b115b2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6153 chars]
	I1218 12:58:17.889830   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:17.889830   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:17.889830   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:17.889830   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:17.892450   12728 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 12:58:17.892450   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:17.892450   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:17.892450   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:17.892450   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:17.892450   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:17 GMT
	I1218 12:58:17.892450   12728 round_trippers.go:580]     Audit-Id: 76ceb65c-2d41-4f71-b27a-b649a33d9820
	I1218 12:58:17.892450   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:17.893850   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"430","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I1218 12:58:18.391653   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-256fn
	I1218 12:58:18.391653   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:18.391653   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:18.391653   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:18.396058   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:18.396058   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:18.396058   12728 round_trippers.go:580]     Audit-Id: 87e56ad8-12db-435a-ac4b-0082a4e1fa4c
	I1218 12:58:18.396058   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:18.396058   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:18.396058   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:18.396058   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:18.396058   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:18 GMT
	I1218 12:58:18.396058   12728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-256fn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bd59fc6-50d6-4764-823d-71232811cff2","resourceVersion":"436","creationTimestamp":"2023-12-18T12:58:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d117bf5f-d14c-4a53-ad5a-250a0b115b2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d117bf5f-d14c-4a53-ad5a-250a0b115b2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6153 chars]
	I1218 12:58:18.397639   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:18.397724   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:18.397724   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:18.397724   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:18.402985   12728 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1218 12:58:18.402985   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:18.402985   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:18.403946   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:18.403946   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:18 GMT
	I1218 12:58:18.403946   12728 round_trippers.go:580]     Audit-Id: 31c39a66-94d9-47ce-b5b5-1f358dd75c08
	I1218 12:58:18.404067   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:18.404160   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:18.404602   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"430","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I1218 12:58:18.889100   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-256fn
	I1218 12:58:18.889100   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:18.889100   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:18.889100   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:18.896123   12728 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1218 12:58:18.896123   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:18.896123   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:18.896123   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:18.896123   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:18.896123   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:18 GMT
	I1218 12:58:18.896123   12728 round_trippers.go:580]     Audit-Id: b0d89b98-922d-49c2-8063-85bc543fbed5
	I1218 12:58:18.896123   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:18.896123   12728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-256fn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bd59fc6-50d6-4764-823d-71232811cff2","resourceVersion":"436","creationTimestamp":"2023-12-18T12:58:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d117bf5f-d14c-4a53-ad5a-250a0b115b2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d117bf5f-d14c-4a53-ad5a-250a0b115b2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6153 chars]
	I1218 12:58:18.897421   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:18.897421   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:18.897421   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:18.897421   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:18.900934   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:18.901490   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:18.901538   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:18.901538   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:18.901538   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:18.901538   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:18.901538   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:18 GMT
	I1218 12:58:18.901538   12728 round_trippers.go:580]     Audit-Id: c9822513-4d8e-45b9-a154-8e248b98d54e
	I1218 12:58:18.901693   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"430","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I1218 12:58:18.901693   12728 pod_ready.go:102] pod "coredns-5dd5756b68-256fn" in "kube-system" namespace has status "Ready":"False"
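
This is one iteration of the readiness wait: roughly every 500ms the client GETs the pod and its node, then reads the pod's Ready condition, exiting once it reports True (as it does a few polls later, at resourceVersion 449). A minimal client-go sketch of the same check, illustrative only and not minikube's actual code; the kubeconfig path and pod name are assumptions taken from this run:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the PodReady condition is True, which is what
    // flips the log line above from "Ready":"False" to "Ready":"True".
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-256fn", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s Ready=%v\n", pod.Name, podReady(pod))
    }
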
	I1218 12:58:19.390965   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-256fn
	I1218 12:58:19.390965   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.390965   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.390965   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.394575   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:19.394575   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.394575   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.395009   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.395009   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.395009   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.395009   12728 round_trippers.go:580]     Audit-Id: 97ed483c-36ad-473b-87ed-2c15a3b43cd8
	I1218 12:58:19.395009   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.395275   12728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-256fn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bd59fc6-50d6-4764-823d-71232811cff2","resourceVersion":"449","creationTimestamp":"2023-12-18T12:58:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d117bf5f-d14c-4a53-ad5a-250a0b115b2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d117bf5f-d14c-4a53-ad5a-250a0b115b2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6285 chars]
	I1218 12:58:19.395474   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:19.396026   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.396026   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.396026   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.399073   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:19.399073   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.399073   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.399414   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.399414   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.399414   12728 round_trippers.go:580]     Audit-Id: f624d71f-b02b-4d58-88d7-4fc60a2f18b8
	I1218 12:58:19.399414   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.399414   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.399626   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"454","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I1218 12:58:19.400238   12728 pod_ready.go:92] pod "coredns-5dd5756b68-256fn" in "kube-system" namespace has status "Ready":"True"
	I1218 12:58:19.400311   12728 pod_ready.go:81] duration metric: took 2.5225973s waiting for pod "coredns-5dd5756b68-256fn" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:19.400311   12728 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-015900" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:19.400502   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-015900
	I1218 12:58:19.400502   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.400549   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.400549   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.403294   12728 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 12:58:19.403294   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.403294   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.403294   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.403836   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.403836   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.403910   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.403937   12728 round_trippers.go:580]     Audit-Id: c4e6dce1-1b77-4217-b864-2effa97daa69
	I1218 12:58:19.404139   12728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-015900","namespace":"kube-system","uid":"a7426d67-4ce5-4c3b-be1e-f08631877ad4","resourceVersion":"344","creationTimestamp":"2023-12-18T12:57:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.238.182:2379","kubernetes.io/config.hash":"0ccdce53007951dcbebf9ee828c6d414","kubernetes.io/config.mirror":"0ccdce53007951dcbebf9ee828c6d414","kubernetes.io/config.seen":"2023-12-18T12:57:48.242049787Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5882 chars]
	I1218 12:58:19.404734   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:19.404734   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.404734   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.404734   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.407389   12728 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 12:58:19.407389   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.407389   12728 round_trippers.go:580]     Audit-Id: a04da8da-50e4-4d03-91ac-219c19c86890
	I1218 12:58:19.407389   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.407389   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.407389   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.407389   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.407389   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.408373   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"454","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I1218 12:58:19.408373   12728 pod_ready.go:92] pod "etcd-multinode-015900" in "kube-system" namespace has status "Ready":"True"
	I1218 12:58:19.408373   12728 pod_ready.go:81] duration metric: took 8.0626ms waiting for pod "etcd-multinode-015900" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:19.408373   12728 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-015900" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:19.408373   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-015900
	I1218 12:58:19.408373   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.408373   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.408373   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.411377   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:19.411377   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.411377   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.411377   12728 round_trippers.go:580]     Audit-Id: e17c3b9e-e568-45ea-afe3-6e845fe7feda
	I1218 12:58:19.411377   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.412138   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.412138   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.412138   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.412477   12728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-015900","namespace":"kube-system","uid":"fed449c3-ce1c-43a7-bedd-023eca58f1d0","resourceVersion":"417","creationTimestamp":"2023-12-18T12:57:48Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.238.182:8443","kubernetes.io/config.hash":"8f250ad179b46602df2936b85d0cd45e","kubernetes.io/config.mirror":"8f250ad179b46602df2936b85d0cd45e","kubernetes.io/config.seen":"2023-12-18T12:57:48.242055287Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7417 chars]
	I1218 12:58:19.412997   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:19.412997   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.412997   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.412997   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.415566   12728 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 12:58:19.415566   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.415665   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.415665   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.415665   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.415665   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.415665   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.415665   12728 round_trippers.go:580]     Audit-Id: fe6c7132-ee96-4d44-966a-0999d536721a
	I1218 12:58:19.415729   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"454","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I1218 12:58:19.415729   12728 pod_ready.go:92] pod "kube-apiserver-multinode-015900" in "kube-system" namespace has status "Ready":"True"
	I1218 12:58:19.416269   12728 pod_ready.go:81] duration metric: took 7.8953ms waiting for pod "kube-apiserver-multinode-015900" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:19.416269   12728 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-015900" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:19.416269   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-015900
	I1218 12:58:19.416269   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.416269   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.416482   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.418300   12728 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 12:58:19.418300   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.418300   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.418300   12728 round_trippers.go:580]     Audit-Id: 93414872-e49f-4727-a1f8-e90e317701ca
	I1218 12:58:19.419106   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.419106   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.419106   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.419106   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.419402   12728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-015900","namespace":"kube-system","uid":"adc96380-4e5f-4486-a471-23e8dad2a63b","resourceVersion":"418","creationTimestamp":"2023-12-18T12:57:48Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a757899e3fc62b58934ce911dce4fad5","kubernetes.io/config.mirror":"a757899e3fc62b58934ce911dce4fad5","kubernetes.io/config.seen":"2023-12-18T12:57:48.242056787Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6977 chars]
	I1218 12:58:19.419700   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:19.419700   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.419700   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.419700   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.424165   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:19.424165   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.424165   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.424165   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.424165   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.424165   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.424165   12728 round_trippers.go:580]     Audit-Id: ca993a19-fd86-49f0-b224-804e22eb857e
	I1218 12:58:19.424165   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.424818   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"454","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I1218 12:58:19.425510   12728 pod_ready.go:92] pod "kube-controller-manager-multinode-015900" in "kube-system" namespace has status "Ready":"True"
	I1218 12:58:19.425550   12728 pod_ready.go:81] duration metric: took 9.281ms waiting for pod "kube-controller-manager-multinode-015900" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:19.425663   12728 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xpxz2" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:19.425767   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xpxz2
	I1218 12:58:19.425767   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.425832   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.425899   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.430625   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:19.430679   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.430734   12728 round_trippers.go:580]     Audit-Id: 16e18aa0-5dc4-4009-9160-b4588d662b62
	I1218 12:58:19.430734   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.430734   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.430734   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.430734   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.430734   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.431017   12728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xpxz2","generateName":"kube-proxy-","namespace":"kube-system","uid":"6070d8c7-5af2-4e9f-b737-760782b764a6","resourceVersion":"403","creationTimestamp":"2023-12-18T12:58:00Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02d626d1-4faa-4d32-9f7c-aa1c56272dc4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02d626d1-4faa-4d32-9f7c-aa1c56272dc4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I1218 12:58:19.431194   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:19.431194   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.431194   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.431194   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.434442   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:19.434442   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.434442   12728 round_trippers.go:580]     Audit-Id: 54968a2a-1829-4342-b2cc-207ff60de35e
	I1218 12:58:19.434442   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.434442   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.434442   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.434442   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.434442   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.434442   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"454","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I1218 12:58:19.435230   12728 pod_ready.go:92] pod "kube-proxy-xpxz2" in "kube-system" namespace has status "Ready":"True"
	I1218 12:58:19.435230   12728 pod_ready.go:81] duration metric: took 9.5661ms waiting for pod "kube-proxy-xpxz2" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:19.435230   12728 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-015900" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:19.595257   12728 request.go:629] Waited for 160.0265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-015900
	I1218 12:58:19.595257   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-015900
	I1218 12:58:19.595649   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.595649   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.595649   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.601683   12728 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1218 12:58:19.601725   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.601725   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.601766   12728 round_trippers.go:580]     Audit-Id: 2f344028-78ec-41c2-a41b-f94c65cc94f9
	I1218 12:58:19.601766   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.601766   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.601766   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.601766   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.601862   12728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-015900","namespace":"kube-system","uid":"45ab2fd7-20c1-4148-8989-51a285e6b7d5","resourceVersion":"416","creationTimestamp":"2023-12-18T12:57:48Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"33558618a35e7b5da7a13bdc2f198c7e","kubernetes.io/config.mirror":"33558618a35e7b5da7a13bdc2f198c7e","kubernetes.io/config.seen":"2023-12-18T12:57:48.242057987Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:57:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4707 chars]
	I1218 12:58:19.798168   12728 request.go:629] Waited for 195.8772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:19.798251   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes/multinode-015900
	I1218 12:58:19.798251   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.798251   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.798251   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.801889   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:19.801889   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.801889   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.802215   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.802215   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.802215   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.802215   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.802280   12728 round_trippers.go:580]     Audit-Id: 7a557ba5-0ff0-4818-80b2-185354b9e5f7
	I1218 12:58:19.802280   12728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"454","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T12:57:44Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I1218 12:58:19.803271   12728 pod_ready.go:92] pod "kube-scheduler-multinode-015900" in "kube-system" namespace has status "Ready":"True"
	I1218 12:58:19.803366   12728 pod_ready.go:81] duration metric: took 368.04ms waiting for pod "kube-scheduler-multinode-015900" in "kube-system" namespace to be "Ready" ...
	I1218 12:58:19.803366   12728 pod_ready.go:38] duration metric: took 2.9436322s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
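
The "Waited for … due to client-side throttling, not priority and fairness" messages in the loop above come from client-go's client-side rate limiter, which paces requests according to the QPS and Burst fields on rest.Config (the defaults are low, on the order of 5 QPS with a burst of 10). A hedged sketch of where those knobs live; the raised values here are arbitrary:

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // client-go throttles locally once the request rate exceeds cfg.QPS;
        // short bursts up to cfg.Burst are allowed. The "Waited for ..." log
        // lines are this limiter pacing the readiness polls.
        cfg.QPS = 50
        cfg.Burst = 100
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }
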
	I1218 12:58:19.803366   12728 api_server.go:52] waiting for apiserver process to appear ...
	I1218 12:58:19.816108   12728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 12:58:19.836837   12728 command_runner.go:130] > 2085
	I1218 12:58:19.837051   12728 api_server.go:72] duration metric: took 19.2382663s to wait for apiserver process to appear ...
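
The process wait above is a pgrep run inside the guest over minikube's SSH runner: -f matches against the full command line, -x requires the pattern to match it exactly, and -n selects the newest matching process, so a single PID (2085 here) comes back. The same probe run directly, as a sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // -x exact match, -n newest process, -f match the full command line.
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("no kube-apiserver process found:", err)
            return
        }
        fmt.Print(string(out)) // e.g. "2085"
    }
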
	I1218 12:58:19.837051   12728 api_server.go:88] waiting for apiserver healthz status ...
	I1218 12:58:19.837109   12728 api_server.go:253] Checking apiserver healthz at https://192.168.238.182:8443/healthz ...
	I1218 12:58:19.848793   12728 api_server.go:279] https://192.168.238.182:8443/healthz returned 200:
	ok
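
The healthz probe is a raw GET against the apiserver's /healthz endpoint, which answers with the literal body "ok" when healthy. /healthz is not part of the typed API, so a client-go version of the probe goes through the REST client, as in this sketch:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // /healthz is untyped, so issue a raw request through the REST client.
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Println(string(body)) // "ok"
    }
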
	I1218 12:58:19.849124   12728 round_trippers.go:463] GET https://192.168.238.182:8443/version
	I1218 12:58:19.849159   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:19.849159   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:19.849159   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:19.851338   12728 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 12:58:19.851338   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:19.851338   12728 round_trippers.go:580]     Audit-Id: 50a1cdd2-7c30-4beb-9ee8-ce73e14dc929
	I1218 12:58:19.851415   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:19.851415   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:19.851415   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:19.851415   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:19.851415   12728 round_trippers.go:580]     Content-Length: 264
	I1218 12:58:19.851415   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:19 GMT
	I1218 12:58:19.851415   12728 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1218 12:58:19.851594   12728 api_server.go:141] control plane version: v1.28.4
	I1218 12:58:19.851690   12728 api_server.go:131] duration metric: took 14.6388ms to wait for apiserver health ...
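
The /version request above is also available through the typed discovery client, which decodes the same JSON into a version.Info; this is how the "control plane version: v1.28.4" line is derived. Sketch:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        // Fields mirror the response body above: GitVersion "v1.28.4", etc.
        fmt.Println(v.GitVersion, v.Platform)
    }
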
	I1218 12:58:19.851690   12728 system_pods.go:43] waiting for kube-system pods to appear ...
	I1218 12:58:20.000483   12728 request.go:629] Waited for 148.7057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods
	I1218 12:58:20.001041   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods
	I1218 12:58:20.001108   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:20.001108   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:20.001108   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:20.007691   12728 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1218 12:58:20.007691   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:20.008661   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:20.008683   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:20.008683   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:20.008683   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:20 GMT
	I1218 12:58:20.008683   12728 round_trippers.go:580]     Audit-Id: 6acb52d2-ad73-4027-90ac-9893beb34f60
	I1218 12:58:20.008683   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:20.010821   12728 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"455"},"items":[{"metadata":{"name":"coredns-5dd5756b68-256fn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bd59fc6-50d6-4764-823d-71232811cff2","resourceVersion":"449","creationTimestamp":"2023-12-18T12:58:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d117bf5f-d14c-4a53-ad5a-250a0b115b2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d117bf5f-d14c-4a53-ad5a-250a0b115b2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54168 chars]
	I1218 12:58:20.012928   12728 system_pods.go:59] 8 kube-system pods found
	I1218 12:58:20.013483   12728 system_pods.go:61] "coredns-5dd5756b68-256fn" [6bd59fc6-50d6-4764-823d-71232811cff2] Running
	I1218 12:58:20.013483   12728 system_pods.go:61] "etcd-multinode-015900" [a7426d67-4ce5-4c3b-be1e-f08631877ad4] Running
	I1218 12:58:20.013483   12728 system_pods.go:61] "kindnet-bfllh" [f376dae1-8132-46e0-a367-7a4764b6138b] Running
	I1218 12:58:20.013540   12728 system_pods.go:61] "kube-apiserver-multinode-015900" [fed449c3-ce1c-43a7-bedd-023eca58f1d0] Running
	I1218 12:58:20.013540   12728 system_pods.go:61] "kube-controller-manager-multinode-015900" [adc96380-4e5f-4486-a471-23e8dad2a63b] Running
	I1218 12:58:20.013540   12728 system_pods.go:61] "kube-proxy-xpxz2" [6070d8c7-5af2-4e9f-b737-760782b764a6] Running
	I1218 12:58:20.013540   12728 system_pods.go:61] "kube-scheduler-multinode-015900" [45ab2fd7-20c1-4148-8989-51a285e6b7d5] Running
	I1218 12:58:20.013540   12728 system_pods.go:61] "storage-provisioner" [9b6ddc85-8b7a-45d0-9867-3be6bd8085e6] Running
	I1218 12:58:20.013540   12728 system_pods.go:74] duration metric: took 161.8499ms to wait for pod list to return data ...
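
The "8 kube-system pods found" summary comes from a single List over the namespace, after which each pod's phase is checked; the default-service-account wait that follows uses the same List pattern against /api/v1/namespaces/default/serviceaccounts. A sketch of the pod listing:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
    }
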
	I1218 12:58:20.013629   12728 default_sa.go:34] waiting for default service account to be created ...
	I1218 12:58:20.200439   12728 request.go:629] Waited for 186.5423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.238.182:8443/api/v1/namespaces/default/serviceaccounts
	I1218 12:58:20.200679   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/default/serviceaccounts
	I1218 12:58:20.200679   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:20.200741   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:20.200741   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:20.205461   12728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 12:58:20.205461   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:20.205461   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:20.205461   12728 round_trippers.go:580]     Content-Length: 261
	I1218 12:58:20.205461   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:20 GMT
	I1218 12:58:20.205461   12728 round_trippers.go:580]     Audit-Id: 4f1257b4-3473-4b69-a357-66a177920eec
	I1218 12:58:20.205461   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:20.205461   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:20.205461   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:20.205461   12728 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"456"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"da3c6889-4b7c-416c-9129-f3bb36ad7663","resourceVersion":"351","creationTimestamp":"2023-12-18T12:57:59Z"}}]}
	I1218 12:58:20.206171   12728 default_sa.go:45] found service account: "default"
	I1218 12:58:20.206171   12728 default_sa.go:55] duration metric: took 192.5421ms for default service account to be created ...
	I1218 12:58:20.206171   12728 system_pods.go:116] waiting for k8s-apps to be running ...
	I1218 12:58:20.405893   12728 request.go:629] Waited for 199.7213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods
	I1218 12:58:20.406241   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/namespaces/kube-system/pods
	I1218 12:58:20.406241   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:20.406241   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:20.406325   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:20.411682   12728 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1218 12:58:20.411682   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:20.412047   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:20.412047   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:20.412047   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:20 GMT
	I1218 12:58:20.412047   12728 round_trippers.go:580]     Audit-Id: 3b02e2e4-9153-40b3-8f1e-2ad3ceef290a
	I1218 12:58:20.412047   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:20.412047   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:20.414263   12728 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"456"},"items":[{"metadata":{"name":"coredns-5dd5756b68-256fn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bd59fc6-50d6-4764-823d-71232811cff2","resourceVersion":"449","creationTimestamp":"2023-12-18T12:58:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d117bf5f-d14c-4a53-ad5a-250a0b115b2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T12:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d117bf5f-d14c-4a53-ad5a-250a0b115b2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54168 chars]
	I1218 12:58:20.417288   12728 system_pods.go:86] 8 kube-system pods found
	I1218 12:58:20.417288   12728 system_pods.go:89] "coredns-5dd5756b68-256fn" [6bd59fc6-50d6-4764-823d-71232811cff2] Running
	I1218 12:58:20.417288   12728 system_pods.go:89] "etcd-multinode-015900" [a7426d67-4ce5-4c3b-be1e-f08631877ad4] Running
	I1218 12:58:20.417288   12728 system_pods.go:89] "kindnet-bfllh" [f376dae1-8132-46e0-a367-7a4764b6138b] Running
	I1218 12:58:20.417288   12728 system_pods.go:89] "kube-apiserver-multinode-015900" [fed449c3-ce1c-43a7-bedd-023eca58f1d0] Running
	I1218 12:58:20.417288   12728 system_pods.go:89] "kube-controller-manager-multinode-015900" [adc96380-4e5f-4486-a471-23e8dad2a63b] Running
	I1218 12:58:20.417288   12728 system_pods.go:89] "kube-proxy-xpxz2" [6070d8c7-5af2-4e9f-b737-760782b764a6] Running
	I1218 12:58:20.417288   12728 system_pods.go:89] "kube-scheduler-multinode-015900" [45ab2fd7-20c1-4148-8989-51a285e6b7d5] Running
	I1218 12:58:20.417288   12728 system_pods.go:89] "storage-provisioner" [9b6ddc85-8b7a-45d0-9867-3be6bd8085e6] Running
	I1218 12:58:20.417288   12728 system_pods.go:126] duration metric: took 211.1159ms to wait for k8s-apps to be running ...
	I1218 12:58:20.417288   12728 system_svc.go:44] waiting for kubelet service to be running ....
	I1218 12:58:20.430983   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 12:58:20.451905   12728 system_svc.go:56] duration metric: took 34.6168ms WaitForService to wait for kubelet.
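
The kubelet check shells into the guest and relies on systemctl's exit status: is-active --quiet prints nothing and exits 0 only when the unit is active. A local sketch of the same probe, checking the kubelet unit directly:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Exit status 0 means the unit is active; --quiet suppresses output.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }
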
	I1218 12:58:20.452068   12728 kubeadm.go:581] duration metric: took 19.8533276s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1218 12:58:20.452380   12728 node_conditions.go:102] verifying NodePressure condition ...
	I1218 12:58:20.592289   12728 request.go:629] Waited for 139.6643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.238.182:8443/api/v1/nodes
	I1218 12:58:20.592613   12728 round_trippers.go:463] GET https://192.168.238.182:8443/api/v1/nodes
	I1218 12:58:20.592690   12728 round_trippers.go:469] Request Headers:
	I1218 12:58:20.592690   12728 round_trippers.go:473]     Accept: application/json, */*
	I1218 12:58:20.592690   12728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1218 12:58:20.596093   12728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 12:58:20.596093   12728 round_trippers.go:577] Response Headers:
	I1218 12:58:20.596093   12728 round_trippers.go:580]     Date: Mon, 18 Dec 2023 12:58:20 GMT
	I1218 12:58:20.596093   12728 round_trippers.go:580]     Audit-Id: 2bac7cc7-78ea-4813-922a-94e527a12c46
	I1218 12:58:20.596093   12728 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 12:58:20.596425   12728 round_trippers.go:580]     Content-Type: application/json
	I1218 12:58:20.596425   12728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a04cbc8-e61e-444a-bf0e-8472e0035c2c
	I1218 12:58:20.596425   12728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e24e6e3-cf39-4982-af10-f8c010ad1d90
	I1218 12:58:20.596715   12728 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"456"},"items":[{"metadata":{"name":"multinode-015900","uid":"b1b226d9-f305-41e8-ae9a-d377e8d5a4c5","resourceVersion":"454","creationTimestamp":"2023-12-18T12:57:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-015900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-015900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T12_57_49_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5013 chars]
	I1218 12:58:20.597516   12728 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1218 12:58:20.597640   12728 node_conditions.go:123] node cpu capacity is 2
	I1218 12:58:20.597726   12728 node_conditions.go:105] duration metric: took 145.1916ms to run NodePressure ...
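
The NodePressure verification lists the nodes and reads capacity straight off node.Status; the two log lines above correspond to the ephemeral-storage and cpu entries of that map. A sketch:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            // e.g. "multinode-015900 ephemeral=17784752Ki cpu=2"
            fmt.Printf("%s ephemeral=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
        }
    }
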
	I1218 12:58:20.597726   12728 start.go:228] waiting for startup goroutines ...
	I1218 12:58:20.597786   12728 start.go:233] waiting for cluster config update ...
	I1218 12:58:20.597786   12728 start.go:242] writing updated cluster config ...
	I1218 12:58:20.614848   12728 ssh_runner.go:195] Run: rm -f paused
	I1218 12:58:20.763418   12728 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1218 12:58:20.764421   12728 out.go:177] * Done! kubectl is now configured to use "multinode-015900" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-12-18 12:55:53 UTC, ends at Mon 2023-12-18 12:59:34 UTC. --
	Dec 18 12:58:01 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:01.154093281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 12:58:05 multinode-015900 cri-dockerd[1228]: time="2023-12-18T12:58:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/edbac0d907de9c24ba1961ed7a92cc8209455aada6074e050909d9df92d3f558/resolv.conf as [nameserver 192.168.224.1]"
	Dec 18 12:58:10 multinode-015900 cri-dockerd[1228]: time="2023-12-18T12:58:10Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20230809-80a64d96: Status: Downloaded newer image for kindest/kindnetd:v20230809-80a64d96"
	Dec 18 12:58:11 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:11.145274686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 18 12:58:11 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:11.145416587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 12:58:11 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:11.145444887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 18 12:58:11 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:11.145456087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.090105292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.092918302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.093147803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.093265003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.091467797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.093390704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.093585504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.093620904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 12:58:17 multinode-015900 cri-dockerd[1228]: time="2023-12-18T12:58:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3844ace09fc880146b66e177e06d623398d2d6294b6b4567d7408eb789d95a9c/resolv.conf as [nameserver 192.168.224.1]"
	Dec 18 12:58:17 multinode-015900 cri-dockerd[1228]: time="2023-12-18T12:58:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/83b7f24a350e12a0393bfee9379b82efd391afe1f3f144683858ea37d0304250/resolv.conf as [nameserver 192.168.224.1]"
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.799524528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.799802729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.799844029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.799905029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.892561660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.900835390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.901163791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 18 12:58:17 multinode-015900 dockerd[1344]: time="2023-12-18T12:58:17.901569892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ce5fda4e192f9       ead0a4a53df89                                                                              About a minute ago   Running             coredns                   0                   83b7f24a350e1       coredns-5dd5756b68-256fn
	6803f1209e6c8       6e38f40d628db                                                                              About a minute ago   Running             storage-provisioner       0                   3844ace09fc88       storage-provisioner
	1bc4da30bccba       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052   About a minute ago   Running             kindnet-cni               0                   edbac0d907de9       kindnet-bfllh
	cbb2ed8a2d451       83f6cc407eed8                                                                              About a minute ago   Running             kube-proxy                0                   176319065ee74       kube-proxy-xpxz2
	b4091c55c3174       73deb9a3f7025                                                                              About a minute ago   Running             etcd                      0                   eb1751282a0b3       etcd-multinode-015900
	3b0c7b029fe22       d058aa5ab969c                                                                              About a minute ago   Running             kube-controller-manager   0                   6dc6c4e8b8bca       kube-controller-manager-multinode-015900
	11172ef348e40       e3db313c6dbc0                                                                              About a minute ago   Running             kube-scheduler            0                   05ab95d27e3db       kube-scheduler-multinode-015900
	6eb3af9836893       7fe0e6f37db33                                                                              About a minute ago   Running             kube-apiserver            0                   b24923dba9040       kube-apiserver-multinode-015900
	
	* 
	* ==> coredns [ce5fda4e192f] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = e48cc74d4d4792b6e037fc6364095f03dd97c499e20d6def56cab70b374eb190d7fd9d3720ca48b7382edb6d6fbe7d631f96f64e38a41e6bd8617ab8ab6ece2c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59375 - 22306 "HINFO IN 712720368267907743.4018947577463859218. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.082850178s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-015900
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-015900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30d8ecd1811578f7b9db580c501c654c189f68d4
	                    minikube.k8s.io/name=multinode-015900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_18T12_57_49_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Dec 2023 12:57:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-015900
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Dec 2023 12:59:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Dec 2023 12:58:19 +0000   Mon, 18 Dec 2023 12:57:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Dec 2023 12:58:19 +0000   Mon, 18 Dec 2023 12:57:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Dec 2023 12:58:19 +0000   Mon, 18 Dec 2023 12:57:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Dec 2023 12:58:19 +0000   Mon, 18 Dec 2023 12:58:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.238.182
	  Hostname:    multinode-015900
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 9642ce71bc18407dae06f7ff0a55c3e9
	  System UUID:                b492d3f0-2f33-7042-bc48-d29ef920286a
	  Boot ID:                    0cf02454-6747-47df-823d-76cb15e13fd1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-256fn                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     94s
	  kube-system                 etcd-multinode-015900                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         106s
	  kube-system                 kindnet-bfllh                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      94s
	  kube-system                 kube-apiserver-multinode-015900             250m (12%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-controller-manager-multinode-015900    200m (10%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-proxy-xpxz2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-scheduler-multinode-015900             100m (5%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 92s                  kube-proxy       
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  115s (x8 over 115s)  kubelet          Node multinode-015900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x8 over 115s)  kubelet          Node multinode-015900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x7 over 115s)  kubelet          Node multinode-015900 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  115s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 106s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  106s                 kubelet          Node multinode-015900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s                 kubelet          Node multinode-015900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s                 kubelet          Node multinode-015900 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  106s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           95s                  node-controller  Node multinode-015900 event: Registered Node multinode-015900 in Controller
	  Normal  NodeReady                78s                  kubelet          Node multinode-015900 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +1.315146] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.117498] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.236956] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Dec18 12:56] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +44.242867] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.148619] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[Dec18 12:57] systemd-fstab-generator[952]: Ignoring "noauto" for root device
	[  +0.592149] systemd-fstab-generator[991]: Ignoring "noauto" for root device
	[  +0.167881] systemd-fstab-generator[1002]: Ignoring "noauto" for root device
	[  +0.194951] systemd-fstab-generator[1015]: Ignoring "noauto" for root device
	[  +1.358713] kauditd_printk_skb: 28 callbacks suppressed
	[  +0.340651] systemd-fstab-generator[1173]: Ignoring "noauto" for root device
	[  +0.165769] systemd-fstab-generator[1184]: Ignoring "noauto" for root device
	[  +0.182075] systemd-fstab-generator[1195]: Ignoring "noauto" for root device
	[  +0.158810] systemd-fstab-generator[1206]: Ignoring "noauto" for root device
	[  +0.193105] systemd-fstab-generator[1220]: Ignoring "noauto" for root device
	[ +12.845013] systemd-fstab-generator[1328]: Ignoring "noauto" for root device
	[  +2.225688] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.125974] systemd-fstab-generator[1709]: Ignoring "noauto" for root device
	[  +0.807148] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.554721] systemd-fstab-generator[2669]: Ignoring "noauto" for root device
	[Dec18 12:58] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [b4091c55c317] <==
	* {"level":"info","ts":"2023-12-18T12:57:42.149157Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-18T12:57:42.149245Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.238.182:2380"}
	{"level":"info","ts":"2023-12-18T12:57:42.149504Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.238.182:2380"}
	{"level":"info","ts":"2023-12-18T12:57:42.150478Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"41ffe37f8ac56e1b","initial-advertise-peer-urls":["https://192.168.238.182:2380"],"listen-peer-urls":["https://192.168.238.182:2380"],"advertise-client-urls":["https://192.168.238.182:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.238.182:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-18T12:57:42.152748Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-18T12:57:42.161741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41ffe37f8ac56e1b is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-18T12:57:42.162034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41ffe37f8ac56e1b became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-18T12:57:42.162203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41ffe37f8ac56e1b received MsgPreVoteResp from 41ffe37f8ac56e1b at term 1"}
	{"level":"info","ts":"2023-12-18T12:57:42.162357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41ffe37f8ac56e1b became candidate at term 2"}
	{"level":"info","ts":"2023-12-18T12:57:42.162595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41ffe37f8ac56e1b received MsgVoteResp from 41ffe37f8ac56e1b at term 2"}
	{"level":"info","ts":"2023-12-18T12:57:42.162821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41ffe37f8ac56e1b became leader at term 2"}
	{"level":"info","ts":"2023-12-18T12:57:42.162975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 41ffe37f8ac56e1b elected leader 41ffe37f8ac56e1b at term 2"}
	{"level":"info","ts":"2023-12-18T12:57:42.166883Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"41ffe37f8ac56e1b","local-member-attributes":"{Name:multinode-015900 ClientURLs:[https://192.168.238.182:2379]}","request-path":"/0/members/41ffe37f8ac56e1b/attributes","cluster-id":"d139d8f891842dfc","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-18T12:57:42.167088Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-18T12:57:42.170001Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.238.182:2379"}
	{"level":"info","ts":"2023-12-18T12:57:42.167206Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-18T12:57:42.16742Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T12:57:42.167505Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-18T12:57:42.182853Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-18T12:57:42.186757Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d139d8f891842dfc","local-member-id":"41ffe37f8ac56e1b","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T12:57:42.189906Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T12:57:42.190037Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T12:57:42.197821Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2023-12-18T12:58:10.973504Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.8128ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-015900\" ","response":"range_response_count:1 size:4488"}
	{"level":"info","ts":"2023-12-18T12:58:10.973868Z","caller":"traceutil/trace.go:171","msg":"trace[1681957784] range","detail":"{range_begin:/registry/minions/multinode-015900; range_end:; response_count:1; response_revision:421; }","duration":"107.204003ms","start":"2023-12-18T12:58:10.866644Z","end":"2023-12-18T12:58:10.973848Z","steps":["trace[1681957784] 'range keys from in-memory index tree'  (duration: 106.618099ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  12:59:34 up 3 min,  0 users,  load average: 0.59, 0.47, 0.19
	Linux multinode-015900 5.10.57 #1 SMP Wed Dec 13 22:38:26 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [1bc4da30bccb] <==
	* I1218 12:58:11.613915       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1218 12:58:11.614240       1 main.go:107] hostIP = 192.168.238.182
	podIP = 192.168.238.182
	I1218 12:58:11.614448       1 main.go:116] setting mtu 1500 for CNI 
	I1218 12:58:11.614465       1 main.go:146] kindnetd IP family: "ipv4"
	I1218 12:58:11.614483       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1218 12:58:12.215365       1 main.go:223] Handling node with IPs: map[192.168.238.182:{}]
	I1218 12:58:12.215432       1 main.go:227] handling current node
	I1218 12:58:22.226935       1 main.go:223] Handling node with IPs: map[192.168.238.182:{}]
	I1218 12:58:22.227047       1 main.go:227] handling current node
	I1218 12:58:32.234161       1 main.go:223] Handling node with IPs: map[192.168.238.182:{}]
	I1218 12:58:32.234287       1 main.go:227] handling current node
	I1218 12:58:42.249143       1 main.go:223] Handling node with IPs: map[192.168.238.182:{}]
	I1218 12:58:42.249325       1 main.go:227] handling current node
	I1218 12:58:52.255486       1 main.go:223] Handling node with IPs: map[192.168.238.182:{}]
	I1218 12:58:52.255580       1 main.go:227] handling current node
	I1218 12:59:02.269372       1 main.go:223] Handling node with IPs: map[192.168.238.182:{}]
	I1218 12:59:02.269477       1 main.go:227] handling current node
	I1218 12:59:12.275019       1 main.go:223] Handling node with IPs: map[192.168.238.182:{}]
	I1218 12:59:12.275141       1 main.go:227] handling current node
	I1218 12:59:22.286087       1 main.go:223] Handling node with IPs: map[192.168.238.182:{}]
	I1218 12:59:22.286186       1 main.go:227] handling current node
	I1218 12:59:32.292190       1 main.go:223] Handling node with IPs: map[192.168.238.182:{}]
	I1218 12:59:32.292308       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [6eb3af983689] <==
	* I1218 12:57:44.444287       1 controller.go:624] quota admission added evaluator for: namespaces
	I1218 12:57:44.451095       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1218 12:57:44.451111       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1218 12:57:44.451295       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1218 12:57:44.451442       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1218 12:57:44.451612       1 aggregator.go:166] initial CRD sync complete...
	I1218 12:57:44.451762       1 autoregister_controller.go:141] Starting autoregister controller
	I1218 12:57:44.451941       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1218 12:57:44.452038       1 cache.go:39] Caches are synced for autoregister controller
	I1218 12:57:44.541375       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1218 12:57:45.251967       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1218 12:57:45.262520       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1218 12:57:45.262610       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1218 12:57:46.063110       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1218 12:57:46.132034       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1218 12:57:46.220907       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1218 12:57:46.229961       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.238.182]
	I1218 12:57:46.231438       1 controller.go:624] quota admission added evaluator for: endpoints
	I1218 12:57:46.238355       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1218 12:57:46.365982       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1218 12:57:48.067539       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1218 12:57:48.083280       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1218 12:57:48.098551       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1218 12:57:59.371448       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1218 12:58:00.070881       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [3b0c7b029fe2] <==
	* I1218 12:57:59.307288       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1218 12:57:59.326210       1 shared_informer.go:318] Caches are synced for resource quota
	I1218 12:57:59.369923       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1218 12:57:59.376893       1 shared_informer.go:318] Caches are synced for resource quota
	I1218 12:57:59.379895       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1218 12:57:59.387923       1 shared_informer.go:318] Caches are synced for disruption
	I1218 12:57:59.764072       1 shared_informer.go:318] Caches are synced for garbage collector
	I1218 12:57:59.764229       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1218 12:57:59.789470       1 shared_informer.go:318] Caches are synced for garbage collector
	I1218 12:58:00.090922       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-bfllh"
	I1218 12:58:00.095676       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xpxz2"
	I1218 12:58:00.208362       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1218 12:58:00.298916       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-jf64x"
	I1218 12:58:00.345879       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-256fn"
	I1218 12:58:00.375518       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="995.424402ms"
	I1218 12:58:00.389464       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-jf64x"
	I1218 12:58:00.405536       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="29.86682ms"
	I1218 12:58:00.427988       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.40744ms"
	I1218 12:58:00.428594       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="216.602µs"
	I1218 12:58:16.597503       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="250.401µs"
	I1218 12:58:16.640495       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.701µs"
	I1218 12:58:19.032207       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.5µs"
	I1218 12:58:19.073070       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.560837ms"
	I1218 12:58:19.074978       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="290.701µs"
	I1218 12:58:19.266042       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	* 
	* ==> kube-proxy [cbb2ed8a2d45] <==
	* I1218 12:58:01.407065       1 server_others.go:69] "Using iptables proxy"
	I1218 12:58:01.431322       1 node.go:141] Successfully retrieved node IP: 192.168.238.182
	I1218 12:58:01.484627       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1218 12:58:01.484653       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1218 12:58:01.489185       1 server_others.go:152] "Using iptables Proxier"
	I1218 12:58:01.489391       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1218 12:58:01.490772       1 server.go:846] "Version info" version="v1.28.4"
	I1218 12:58:01.491259       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1218 12:58:01.494167       1 config.go:188] "Starting service config controller"
	I1218 12:58:01.494300       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1218 12:58:01.494488       1 config.go:97] "Starting endpoint slice config controller"
	I1218 12:58:01.494636       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1218 12:58:01.497871       1 config.go:315] "Starting node config controller"
	I1218 12:58:01.497885       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1218 12:58:01.597755       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1218 12:58:01.603076       1 shared_informer.go:318] Caches are synced for node config
	I1218 12:58:01.597939       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [11172ef348e4] <==
	* E1218 12:57:44.421204       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1218 12:57:44.421758       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1218 12:57:44.425651       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1218 12:57:44.425662       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1218 12:57:44.425668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1218 12:57:44.425674       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1218 12:57:45.237083       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1218 12:57:45.237113       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1218 12:57:45.258302       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1218 12:57:45.258342       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1218 12:57:45.363853       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1218 12:57:45.364833       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1218 12:57:45.392943       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1218 12:57:45.393313       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1218 12:57:45.476378       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1218 12:57:45.476547       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1218 12:57:45.525478       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1218 12:57:45.525523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1218 12:57:45.546560       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1218 12:57:45.547049       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1218 12:57:45.546922       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1218 12:57:45.547771       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1218 12:57:45.628817       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1218 12:57:45.628929       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1218 12:57:47.687467       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-12-18 12:55:53 UTC, ends at Mon 2023-12-18 12:59:34 UTC. --
	Dec 18 12:58:00 multinode-015900 kubelet[2696]: I1218 12:58:00.208182    2696 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2ftm\" (UniqueName: \"kubernetes.io/projected/f376dae1-8132-46e0-a367-7a4764b6138b-kube-api-access-f2ftm\") pod \"kindnet-bfllh\" (UID: \"f376dae1-8132-46e0-a367-7a4764b6138b\") " pod="kube-system/kindnet-bfllh"
	Dec 18 12:58:00 multinode-015900 kubelet[2696]: I1218 12:58:00.208641    2696 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f376dae1-8132-46e0-a367-7a4764b6138b-xtables-lock\") pod \"kindnet-bfllh\" (UID: \"f376dae1-8132-46e0-a367-7a4764b6138b\") " pod="kube-system/kindnet-bfllh"
	Dec 18 12:58:00 multinode-015900 kubelet[2696]: I1218 12:58:00.208851    2696 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f376dae1-8132-46e0-a367-7a4764b6138b-cni-cfg\") pod \"kindnet-bfllh\" (UID: \"f376dae1-8132-46e0-a367-7a4764b6138b\") " pod="kube-system/kindnet-bfllh"
	Dec 18 12:58:00 multinode-015900 kubelet[2696]: I1218 12:58:00.310466    2696 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6070d8c7-5af2-4e9f-b737-760782b764a6-xtables-lock\") pod \"kube-proxy-xpxz2\" (UID: \"6070d8c7-5af2-4e9f-b737-760782b764a6\") " pod="kube-system/kube-proxy-xpxz2"
	Dec 18 12:58:00 multinode-015900 kubelet[2696]: I1218 12:58:00.310511    2696 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6070d8c7-5af2-4e9f-b737-760782b764a6-lib-modules\") pod \"kube-proxy-xpxz2\" (UID: \"6070d8c7-5af2-4e9f-b737-760782b764a6\") " pod="kube-system/kube-proxy-xpxz2"
	Dec 18 12:58:00 multinode-015900 kubelet[2696]: I1218 12:58:00.310554    2696 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lm2q\" (UniqueName: \"kubernetes.io/projected/6070d8c7-5af2-4e9f-b737-760782b764a6-kube-api-access-6lm2q\") pod \"kube-proxy-xpxz2\" (UID: \"6070d8c7-5af2-4e9f-b737-760782b764a6\") " pod="kube-system/kube-proxy-xpxz2"
	Dec 18 12:58:00 multinode-015900 kubelet[2696]: I1218 12:58:00.310620    2696 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6070d8c7-5af2-4e9f-b737-760782b764a6-kube-proxy\") pod \"kube-proxy-xpxz2\" (UID: \"6070d8c7-5af2-4e9f-b737-760782b764a6\") " pod="kube-system/kube-proxy-xpxz2"
	Dec 18 12:58:05 multinode-015900 kubelet[2696]: I1218 12:58:05.950746    2696 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edbac0d907de9c24ba1961ed7a92cc8209455aada6074e050909d9df92d3f558"
	Dec 18 12:58:08 multinode-015900 kubelet[2696]: I1218 12:58:08.434639    2696 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xpxz2" podStartSLOduration=8.4345963 podCreationTimestamp="2023-12-18 12:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-18 12:58:01.642913388 +0000 UTC m=+13.615243282" watchObservedRunningTime="2023-12-18 12:58:08.4345963 +0000 UTC m=+20.406926194"
	Dec 18 12:58:16 multinode-015900 kubelet[2696]: I1218 12:58:16.551208    2696 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 18 12:58:16 multinode-015900 kubelet[2696]: I1218 12:58:16.592113    2696 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-bfllh" podStartSLOduration=11.852970895 podCreationTimestamp="2023-12-18 12:58:00 +0000 UTC" firstStartedPulling="2023-12-18 12:58:05.952845271 +0000 UTC m=+17.925175065" lastFinishedPulling="2023-12-18 12:58:10.69194979 +0000 UTC m=+22.664279584" observedRunningTime="2023-12-18 12:58:12.047985923 +0000 UTC m=+24.020315717" watchObservedRunningTime="2023-12-18 12:58:16.592075414 +0000 UTC m=+28.564405308"
	Dec 18 12:58:16 multinode-015900 kubelet[2696]: I1218 12:58:16.592277    2696 topology_manager.go:215] "Topology Admit Handler" podUID="9b6ddc85-8b7a-45d0-9867-3be6bd8085e6" podNamespace="kube-system" podName="storage-provisioner"
	Dec 18 12:58:16 multinode-015900 kubelet[2696]: I1218 12:58:16.596449    2696 topology_manager.go:215] "Topology Admit Handler" podUID="6bd59fc6-50d6-4764-823d-71232811cff2" podNamespace="kube-system" podName="coredns-5dd5756b68-256fn"
	Dec 18 12:58:16 multinode-015900 kubelet[2696]: I1218 12:58:16.762786    2696 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkh2p\" (UniqueName: \"kubernetes.io/projected/9b6ddc85-8b7a-45d0-9867-3be6bd8085e6-kube-api-access-vkh2p\") pod \"storage-provisioner\" (UID: \"9b6ddc85-8b7a-45d0-9867-3be6bd8085e6\") " pod="kube-system/storage-provisioner"
	Dec 18 12:58:16 multinode-015900 kubelet[2696]: I1218 12:58:16.762853    2696 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9f8l\" (UniqueName: \"kubernetes.io/projected/6bd59fc6-50d6-4764-823d-71232811cff2-kube-api-access-n9f8l\") pod \"coredns-5dd5756b68-256fn\" (UID: \"6bd59fc6-50d6-4764-823d-71232811cff2\") " pod="kube-system/coredns-5dd5756b68-256fn"
	Dec 18 12:58:16 multinode-015900 kubelet[2696]: I1218 12:58:16.762880    2696 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9b6ddc85-8b7a-45d0-9867-3be6bd8085e6-tmp\") pod \"storage-provisioner\" (UID: \"9b6ddc85-8b7a-45d0-9867-3be6bd8085e6\") " pod="kube-system/storage-provisioner"
	Dec 18 12:58:16 multinode-015900 kubelet[2696]: I1218 12:58:16.762904    2696 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bd59fc6-50d6-4764-823d-71232811cff2-config-volume\") pod \"coredns-5dd5756b68-256fn\" (UID: \"6bd59fc6-50d6-4764-823d-71232811cff2\") " pod="kube-system/coredns-5dd5756b68-256fn"
	Dec 18 12:58:17 multinode-015900 kubelet[2696]: I1218 12:58:17.727884    2696 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83b7f24a350e12a0393bfee9379b82efd391afe1f3f144683858ea37d0304250"
	Dec 18 12:58:17 multinode-015900 kubelet[2696]: I1218 12:58:17.974783    2696 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3844ace09fc880146b66e177e06d623398d2d6294b6b4567d7408eb789d95a9c"
	Dec 18 12:58:19 multinode-015900 kubelet[2696]: I1218 12:58:19.036466    2696 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.036424511 podCreationTimestamp="2023-12-18 12:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-18 12:58:19.009167825 +0000 UTC m=+30.981497719" watchObservedRunningTime="2023-12-18 12:58:19.036424511 +0000 UTC m=+31.008754305"
	Dec 18 12:58:19 multinode-015900 kubelet[2696]: I1218 12:58:19.060990    2696 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-256fn" podStartSLOduration=19.060951088 podCreationTimestamp="2023-12-18 12:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-18 12:58:19.037843415 +0000 UTC m=+31.010173209" watchObservedRunningTime="2023-12-18 12:58:19.060951088 +0000 UTC m=+31.033280882"
	Dec 18 12:58:48 multinode-015900 kubelet[2696]: E1218 12:58:48.472937    2696 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 18 12:58:48 multinode-015900 kubelet[2696]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 18 12:58:48 multinode-015900 kubelet[2696]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 18 12:58:48 multinode-015900 kubelet[2696]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	* 
	* ==> storage-provisioner [6803f1209e6c] <==
	* I1218 12:58:18.094468       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1218 12:58:18.126636       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1218 12:58:18.129320       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1218 12:58:18.148124       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1218 12:58:18.149008       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-015900_dcfef51d-f585-43fa-bb0e-238906d259a2!
	I1218 12:58:18.149616       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"347a87ae-8581-4703-a4c5-aacd517d4214", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-015900_dcfef51d-f585-43fa-bb0e-238906d259a2 became leader
	I1218 12:58:18.257838       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-015900_dcfef51d-f585-43fa-bb0e-238906d259a2!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1218 12:59:26.517298    4860 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-015900 -n multinode-015900
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-015900 -n multinode-015900: (12.0313938s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-015900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeleteNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeleteNode (53.10s)
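Note on the recurring stderr warning: each stderr capture in this section repeats "Unable to resolve the current Docker CLI context \"default\"" against the same meta.json path. The long directory name in that path is simply the SHA-256 digest of the string "default": the docker CLI keys its on-disk context metadata by sha256(context name) under .docker\contexts\meta\<digest>\meta.json, so every invocation is stumbling over one missing file on the Jenkins host rather than anything test-specific. A minimal Go sketch (illustrative only, not minikube code; the base path is copied from the logs) that reproduces the digest and path:

	package main

	import (
		"crypto/sha256"
		"fmt"
		"path/filepath"
	)

	func main() {
		// The docker CLI stores context metadata under
		// <docker config dir>\contexts\meta\<sha256(context name)>\meta.json.
		digest := fmt.Sprintf("%x", sha256.Sum256([]byte("default")))
		fmt.Println(digest) // 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
		fmt.Println(filepath.Join(`C:\Users\jenkins.minikube7\.docker\contexts\meta`, digest, "meta.json"))
	}

The same warning also appears in the stderr of commands that otherwise succeed later in this section, so it reads as host-environment noise rather than the proximate cause of this failure.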

                                                
                                    
TestMultiNode/serial/StopMultiNode (35.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-015900 stop
multinode_test.go:342: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-015900 stop: (28.672759s)
multinode_test.go:348: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-015900 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-015900 status: exit status 7 (2.4296015s)

                                                
                                                
-- stdout --
	multinode-015900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1218 13:00:16.524551   15168 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:355: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-015900 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-015900 status --alsologtostderr: exit status 7 (2.4073604s)

                                                
                                                
-- stdout --
	multinode-015900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1218 13:00:18.956548    8376 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1218 13:00:19.042546    8376 out.go:296] Setting OutFile to fd 864 ...
	I1218 13:00:19.043547    8376 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 13:00:19.043547    8376 out.go:309] Setting ErrFile to fd 792...
	I1218 13:00:19.043547    8376 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 13:00:19.058542    8376 out.go:303] Setting JSON to false
	I1218 13:00:19.058542    8376 mustload.go:65] Loading cluster: multinode-015900
	I1218 13:00:19.058542    8376 notify.go:220] Checking for updates...
	I1218 13:00:19.059536    8376 config.go:182] Loaded profile config "multinode-015900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 13:00:19.059536    8376 status.go:255] checking status of multinode-015900 ...
	I1218 13:00:19.060555    8376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 13:00:21.195899    8376 main.go:141] libmachine: [stdout =====>] : Off
	
	I1218 13:00:21.195899    8376 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:00:21.195899    8376 status.go:330] multinode-015900 host status = "Stopped" (err=<nil>)
	I1218 13:00:21.195899    8376 status.go:343] host is not running, skipping remaining checks
	I1218 13:00:21.195899    8376 status.go:257] multinode-015900 status: &{Name:multinode-015900 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:361: incorrect number of stopped hosts: args "out/minikube-windows-amd64.exe -p multinode-015900 status --alsologtostderr": multinode-015900
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:365: incorrect number of stopped kubelets: args "out/minikube-windows-amd64.exe -p multinode-015900 status --alsologtostderr": multinode-015900
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900: exit status 7 (2.4513293s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W1218 13:00:21.364460   10256 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-015900" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (35.96s)
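One recurring artifact worth decoding before the next test: every command above begins with the 'Unable to resolve the current Docker CLI context "default"' warning. Docker stores context metadata under the SHA-256 hash of the context name, and the long directory in the missing meta.json path is exactly sha256("default"), so the warning only means the default context's metadata was never written on this runner:

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    func main() {
        fmt.Printf("%x\n", sha256.Sum256([]byte("default")))
        // 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
        // ...the directory named in every warning's meta.json path.
    }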

TestMultiNode/serial/RestartMultiNode (181.59s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-015900 --wait=true -v=8 --alsologtostderr --driver=hyperv
E1218 13:00:25.608659   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 13:01:59.760807   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
multinode_test.go:382: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-015900 --wait=true -v=8 --alsologtostderr --driver=hyperv: exit status 90 (2m49.5746139s)

-- stdout --
	* [multinode-015900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17824
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting control plane node multinode-015900 in cluster multinode-015900
	* Restarting existing hyperv VM for "multinode-015900" ...
	
	

-- /stdout --
** stderr ** 
	W1218 13:00:23.815908    8288 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1218 13:00:23.895668    8288 out.go:296] Setting OutFile to fd 824 ...
	I1218 13:00:23.897495    8288 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 13:00:23.897587    8288 out.go:309] Setting ErrFile to fd 716...
	I1218 13:00:23.897587    8288 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 13:00:23.919826    8288 out.go:303] Setting JSON to false
	I1218 13:00:23.923340    8288 start.go:128] hostinfo: {"hostname":"minikube7","uptime":4898,"bootTime":1702899525,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1218 13:00:23.923510    8288 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1218 13:00:23.956038    8288 out.go:177] * [multinode-015900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1218 13:00:23.957156    8288 notify.go:220] Checking for updates...
	I1218 13:00:23.958429    8288 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 13:00:23.959337    8288 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 13:00:24.007364    8288 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1218 13:00:24.059147    8288 out.go:177]   - MINIKUBE_LOCATION=17824
	I1218 13:00:24.060111    8288 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 13:00:24.110931    8288 config.go:182] Loaded profile config "multinode-015900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 13:00:24.112187    8288 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 13:00:29.389920    8288 out.go:177] * Using the hyperv driver based on existing profile
	I1218 13:00:29.390816    8288 start.go:298] selected driver: hyperv
	I1218 13:00:29.390816    8288 start.go:902] validating driver "hyperv" against &{Name:multinode-015900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-015900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.238.182 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 13:00:29.391528    8288 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 13:00:29.440920    8288 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 13:00:29.440920    8288 cni.go:84] Creating CNI manager for ""
	I1218 13:00:29.440920    8288 cni.go:136] 1 nodes found, recommending kindnet
	I1218 13:00:29.440920    8288 start_flags.go:323] config:
	{Name:multinode-015900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-015900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.238.182 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
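	Note what the restored profile contains: MultiNodeRequested:true, but a Nodes: list with a single control-plane entry. That is consistent with the missing second stanza that failed StopMultiNode above. A sketch of checking that directly against the logged config.json path (the struct fields mirror the dump above; the file shape is an assumption, not a minikube API):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // profile models just the fields of the dump we care about.
    type profile struct {
        Name  string
        Nodes []struct {
            Name         string
            IP           string
            ControlPlane bool
            Worker       bool
        }
    }

    func main() {
        raw, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\config.json`)
        if err != nil {
            panic(err)
        }
        var p profile
        if err := json.Unmarshal(raw, &p); err != nil {
            panic(err)
        }
        fmt.Println(p.Name, "nodes:", len(p.Nodes)) // 1 here, though two were requested
    }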
	I1218 13:00:29.440920    8288 iso.go:125] acquiring lock: {Name:mka7fdf4332632f41b99a369cc525e40499a0030 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 13:00:29.443377    8288 out.go:177] * Starting control plane node multinode-015900 in cluster multinode-015900
	I1218 13:00:29.444036    8288 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 13:00:29.444269    8288 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1218 13:00:29.444319    8288 cache.go:56] Caching tarball of preloaded images
	I1218 13:00:29.444540    8288 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1218 13:00:29.444540    8288 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1218 13:00:29.444540    8288 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\config.json ...
	I1218 13:00:29.447499    8288 start.go:365] acquiring machines lock for multinode-015900: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1218 13:00:29.447499    8288 start.go:369] acquired machines lock for "multinode-015900" in 0s
	I1218 13:00:29.448081    8288 start.go:96] Skipping create...Using existing machine configuration
	I1218 13:00:29.448128    8288 fix.go:54] fixHost starting: 
	I1218 13:00:29.448769    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 13:00:32.065371    8288 main.go:141] libmachine: [stdout =====>] : Off
	
	I1218 13:00:32.065637    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:00:32.065717    8288 fix.go:102] recreateIfNeeded on multinode-015900: state=Stopped err=<nil>
	W1218 13:00:32.065717    8288 fix.go:128] unexpected machine state, will restart: <nil>
	I1218 13:00:32.066714    8288 out.go:177] * Restarting existing hyperv VM for "multinode-015900" ...
	I1218 13:00:32.067404    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-015900
	I1218 13:00:34.931001    8288 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:00:34.931001    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:00:34.931001    8288 main.go:141] libmachine: Waiting for host to start...
	I1218 13:00:34.931091    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 13:00:37.144817    8288 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:00:37.144855    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:00:37.144855    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:00:39.638223    8288 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:00:39.638223    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:00:40.640910    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 13:00:42.832255    8288 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:00:42.832559    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:00:42.832559    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:00:45.332514    8288 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:00:45.332715    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:00:46.348104    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 13:00:48.509152    8288 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:00:48.509379    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:00:48.509487    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:00:51.021131    8288 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:00:51.021539    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:00:52.036179    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 13:00:54.267398    8288 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:00:54.267676    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:00:54.267676    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:00:56.731737    8288 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:00:56.731960    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:00:57.735540    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 13:00:59.970337    8288 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:00:59.970383    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:00:59.970457    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:01:02.511446    8288 main.go:141] libmachine: [stdout =====>] : 192.168.238.77
	
	I1218 13:01:02.511446    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:01:02.514451    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 13:01:04.635282    8288 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:01:04.635282    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:01:04.635425    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:01:07.169071    8288 main.go:141] libmachine: [stdout =====>] : 192.168.238.77
	
	I1218 13:01:07.169071    8288 main.go:141] libmachine: [stderr =====>] : 
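	The stretch above is the standard wait-for-IP loop: poll the VM state, then the first adapter's first IP address, sleeping about a second between empty reads until Hyper-V reports one. A compressed sketch of the same loop; the PowerShell expressions are exactly the ones logged, while the helper and its error handling are simplified:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // ps runs one PowerShell expression the way libmachine does above
    // (errors ignored for brevity; an empty string means "not ready yet").
    func ps(expr string) string {
        out, _ := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", expr).Output()
        return strings.TrimSpace(string(out))
    }

    func main() {
        for {
            if ps(`( Hyper-V\Get-VM multinode-015900 ).state`) != "Running" {
                time.Sleep(time.Second)
                continue
            }
            ip := ps(`(( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]`)
            if ip != "" {
                fmt.Println("got IP:", ip) // 192.168.238.77 above
                return
            }
            time.Sleep(time.Second)
        }
    }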
	I1218 13:01:07.169071    8288 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900\config.json ...
	I1218 13:01:07.172576    8288 machine.go:88] provisioning docker machine ...
	I1218 13:01:07.172576    8288 buildroot.go:166] provisioning hostname "multinode-015900"
	I1218 13:01:07.172576    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 13:01:09.252294    8288 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:01:09.252294    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:01:09.252294    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:01:11.813684    8288 main.go:141] libmachine: [stdout =====>] : 192.168.238.77
	
	I1218 13:01:11.813915    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:01:11.819713    8288 main.go:141] libmachine: Using SSH client type: native
	I1218 13:01:11.820363    8288 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.238.77 22 <nil> <nil>}
	I1218 13:01:11.820363    8288 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-015900 && echo "multinode-015900" | sudo tee /etc/hostname
	I1218 13:01:11.981422    8288 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-015900
	
	I1218 13:01:11.981612    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 13:01:14.115503    8288 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:01:14.115503    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:01:14.115697    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:01:16.596519    8288 main.go:141] libmachine: [stdout =====>] : 192.168.238.77
	
	I1218 13:01:16.596519    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:01:16.601189    8288 main.go:141] libmachine: Using SSH client type: native
	I1218 13:01:16.602031    8288 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.238.77 22 <nil> <nil>}
	I1218 13:01:16.602031    8288 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-015900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-015900/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-015900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 13:01:16.753570    8288 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1218 13:01:16.753682    8288 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1218 13:01:16.753775    8288 buildroot.go:174] setting up certificates
	I1218 13:01:16.753775    8288 provision.go:83] configureAuth start
	I1218 13:01:16.753775    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 13:01:18.822076    8288 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:01:18.822076    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:01:18.822076    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:01:21.295244    8288 main.go:141] libmachine: [stdout =====>] : 192.168.238.77
	
	I1218 13:01:21.295244    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:01:21.295375    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 13:01:23.424904    8288 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:01:23.425115    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:01:23.425115    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:01:25.900702    8288 main.go:141] libmachine: [stdout =====>] : 192.168.238.77
	
	I1218 13:01:25.900956    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:01:25.900956    8288 provision.go:138] copyHostCerts
	I1218 13:01:25.901045    8288 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I1218 13:01:25.901045    8288 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1218 13:01:25.901045    8288 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1218 13:01:25.901984    8288 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1218 13:01:25.902823    8288 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I1218 13:01:25.902823    8288 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1218 13:01:25.903392    8288 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1218 13:01:25.903844    8288 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1218 13:01:25.905070    8288 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I1218 13:01:25.905070    8288 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1218 13:01:25.905070    8288 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1218 13:01:25.905828    8288 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1218 13:01:25.906739    8288 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-015900 san=[192.168.238.77 192.168.238.77 localhost 127.0.0.1 minikube multinode-015900]
	I1218 13:01:26.231504    8288 provision.go:172] copyRemoteCerts
	I1218 13:01:26.246867    8288 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 13:01:26.246867    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 13:01:28.334325    8288 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:01:28.334325    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:01:28.334413    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:01:30.890933    8288 main.go:141] libmachine: [stdout =====>] : 192.168.238.77
	
	I1218 13:01:30.891167    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:01:30.891650    8288 sshutil.go:53] new ssh client: &{IP:192.168.238.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa Username:docker}
	I1218 13:01:31.000612    8288 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7537277s)
	I1218 13:01:31.000919    8288 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1218 13:01:31.001728    8288 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1218 13:01:31.039055    8288 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1218 13:01:31.039516    8288 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I1218 13:01:31.079054    8288 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1218 13:01:31.079614    8288 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 13:01:31.120260    8288 provision.go:86] duration metric: configureAuth took 14.366434s
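	The tripled "/etc/docker /etc/docker /etc/docker" in the mkdir above is harmless: three cert files are pushed, and the target directory is apparently collected once per file before a single mkdir -p. A sketch of that assumption:

    package main

    import (
        "fmt"
        "path"
        "strings"
    )

    func main() {
        remotes := []string{"/etc/docker/ca.pem", "/etc/docker/server.pem", "/etc/docker/server-key.pem"}
        dirs := make([]string, 0, len(remotes))
        for _, r := range remotes {
            dirs = append(dirs, path.Dir(r)) // one entry per cert, duplicates kept
        }
        fmt.Println("sudo mkdir -p " + strings.Join(dirs, " "))
        // sudo mkdir -p /etc/docker /etc/docker /etc/docker
    }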
	I1218 13:01:31.120329    8288 buildroot.go:189] setting minikube options for container-runtime
	I1218 13:01:31.120968    8288 config.go:182] Loaded profile config "multinode-015900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 13:01:31.121088    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 13:01:33.209735    8288 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:01:33.209735    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:01:33.209951    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:01:35.713453    8288 main.go:141] libmachine: [stdout =====>] : 192.168.238.77
	
	I1218 13:01:35.713453    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:01:35.718320    8288 main.go:141] libmachine: Using SSH client type: native
	I1218 13:01:35.719315    8288 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.238.77 22 <nil> <nil>}
	I1218 13:01:35.719315    8288 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1218 13:01:35.862985    8288 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1218 13:01:35.863095    8288 buildroot.go:70] root file system type: tmpfs
	I1218 13:01:35.863282    8288 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1218 13:01:35.863376    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 13:01:37.965538    8288 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:01:37.965538    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:01:37.965618    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:01:40.522992    8288 main.go:141] libmachine: [stdout =====>] : 192.168.238.77
	
	I1218 13:01:40.522992    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:01:40.527690    8288 main.go:141] libmachine: Using SSH client type: native
	I1218 13:01:40.528706    8288 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.238.77 22 <nil> <nil>}
	I1218 13:01:40.528706    8288 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1218 13:01:40.691077    8288 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1218 13:01:40.691178    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 13:01:42.804197    8288 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:01:42.804197    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:01:42.804197    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:01:45.286223    8288 main.go:141] libmachine: [stdout =====>] : 192.168.238.77
	
	I1218 13:01:45.286438    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:01:45.291977    8288 main.go:141] libmachine: Using SSH client type: native
	I1218 13:01:45.292683    8288 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.238.77 22 <nil> <nil>}
	I1218 13:01:45.292683    8288 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1218 13:01:46.511137    8288 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1218 13:01:46.511137    8288 machine.go:91] provisioned docker machine in 39.3384193s
	I1218 13:01:46.511137    8288 start.go:300] post-start starting for "multinode-015900" (driver="hyperv")
	I1218 13:01:46.511137    8288 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 13:01:46.524216    8288 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 13:01:46.524216    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 13:01:48.620897    8288 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:01:48.620897    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:01:48.620897    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:01:51.095262    8288 main.go:141] libmachine: [stdout =====>] : 192.168.238.77
	
	I1218 13:01:51.095576    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:01:51.095890    8288 sshutil.go:53] new ssh client: &{IP:192.168.238.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa Username:docker}
	I1218 13:01:51.204399    8288 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.680144s)
	I1218 13:01:51.217682    8288 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 13:01:51.224046    8288 command_runner.go:130] > NAME=Buildroot
	I1218 13:01:51.224046    8288 command_runner.go:130] > VERSION=2021.02.12-1-g0492d51-dirty
	I1218 13:01:51.224046    8288 command_runner.go:130] > ID=buildroot
	I1218 13:01:51.224046    8288 command_runner.go:130] > VERSION_ID=2021.02.12
	I1218 13:01:51.224046    8288 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1218 13:01:51.224166    8288 info.go:137] Remote host: Buildroot 2021.02.12
	I1218 13:01:51.224306    8288 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1218 13:01:51.224767    8288 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1218 13:01:51.226368    8288 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem -> 149282.pem in /etc/ssl/certs
	I1218 13:01:51.226368    8288 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem -> /etc/ssl/certs/149282.pem
	I1218 13:01:51.238255    8288 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 13:01:51.253030    8288 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem --> /etc/ssl/certs/149282.pem (1708 bytes)
	I1218 13:01:51.289315    8288 start.go:303] post-start completed in 4.7781605s
	I1218 13:01:51.289446    8288 fix.go:56] fixHost completed within 1m21.8410215s
	I1218 13:01:51.289586    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 13:01:53.378586    8288 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:01:53.378586    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:01:53.378714    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:01:55.878830    8288 main.go:141] libmachine: [stdout =====>] : 192.168.238.77
	
	I1218 13:01:55.878830    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:01:55.883666    8288 main.go:141] libmachine: Using SSH client type: native
	I1218 13:01:55.884248    8288 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.238.77 22 <nil> <nil>}
	I1218 13:01:55.884248    8288 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1218 13:01:56.024940    8288 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702904516.035095238
	
	I1218 13:01:56.024940    8288 fix.go:206] guest clock: 1702904516.035095238
	I1218 13:01:56.025493    8288 fix.go:219] Guest: 2023-12-18 13:01:56.035095238 +0000 UTC Remote: 2023-12-18 13:01:51.2894462 +0000 UTC m=+87.576013101 (delta=4.745649038s)
	I1218 13:01:56.025592    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 13:01:58.121213    8288 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:01:58.121323    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:01:58.121323    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:02:00.662736    8288 main.go:141] libmachine: [stdout =====>] : 192.168.238.77
	
	I1218 13:02:00.662736    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:02:00.668992    8288 main.go:141] libmachine: Using SSH client type: native
	I1218 13:02:00.670098    8288 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.238.77 22 <nil> <nil>}
	I1218 13:02:00.670098    8288 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702904516
	I1218 13:02:00.820291    8288 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Dec 18 13:01:56 UTC 2023
	
	I1218 13:02:00.820291    8288 fix.go:226] clock set: Mon Dec 18 13:01:56 UTC 2023
	 (err=<nil>)
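	The clock fix above is plain arithmetic: the guest reported 1702904516.035095238 while the host-side reference was 13:01:51.2894462 UTC (epoch 1702904511.2894462), so the logged delta of 4.745649038s is guest minus host, and the guest clock is then set back with "date -s". Checked in a few lines:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        guest := time.Unix(1702904516, 35095238)                        // guest clock from the log
        host := time.Date(2023, 12, 18, 13, 1, 51, 289446200, time.UTC) // "Remote" timestamp
        fmt.Println(guest.Sub(host))                                    // 4.745649038s, matching the log
    }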
	I1218 13:02:00.820291    8288 start.go:83] releasing machines lock for "multinode-015900", held for 1m31.3724615s
	I1218 13:02:00.820829    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 13:02:02.883340    8288 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:02:02.883340    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:02:02.883721    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:02:05.389667    8288 main.go:141] libmachine: [stdout =====>] : 192.168.238.77
	
	I1218 13:02:05.389871    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:02:05.394715    8288 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 13:02:05.394715    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 13:02:05.409842    8288 ssh_runner.go:195] Run: cat /version.json
	I1218 13:02:05.409842    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-015900 ).state
	I1218 13:02:07.620954    8288 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:02:07.621232    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:02:07.621232    8288 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:02:07.621487    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:02:07.621684    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:02:07.622332    8288 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-015900 ).networkadapters[0]).ipaddresses[0]
	I1218 13:02:10.298289    8288 main.go:141] libmachine: [stdout =====>] : 192.168.238.77
	
	I1218 13:02:10.298289    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:02:10.299463    8288 sshutil.go:53] new ssh client: &{IP:192.168.238.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa Username:docker}
	I1218 13:02:10.318846    8288 main.go:141] libmachine: [stdout =====>] : 192.168.238.77
	
	I1218 13:02:10.318904    8288 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:02:10.319097    8288 sshutil.go:53] new ssh client: &{IP:192.168.238.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa Username:docker}
	I1218 13:02:10.490263    8288 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1218 13:02:10.490405    8288 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0955302s)
	I1218 13:02:10.490516    8288 command_runner.go:130] > {"iso_version": "v1.32.1-1702490427-17765", "kicbase_version": "v0.0.42-1702394725-17761", "minikube_version": "v1.32.0", "commit": "2780c4af854905e5cd4b94dc93de1f9d00b9040d"}
	I1218 13:02:10.490516    8288 ssh_runner.go:235] Completed: cat /version.json: (5.0806558s)
	I1218 13:02:10.503948    8288 ssh_runner.go:195] Run: systemctl --version
	I1218 13:02:10.512263    8288 command_runner.go:130] > systemd 247 (247)
	I1218 13:02:10.512263    8288 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1218 13:02:10.524260    8288 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 13:02:10.531416    8288 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1218 13:02:10.532487    8288 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 13:02:10.544594    8288 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 13:02:10.567329    8288 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1218 13:02:10.567329    8288 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1218 13:02:10.567463    8288 start.go:475] detecting cgroup driver to use...
	I1218 13:02:10.567740    8288 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 13:02:10.597314    8288 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1218 13:02:10.609326    8288 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1218 13:02:10.638931    8288 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 13:02:10.655061    8288 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 13:02:10.668877    8288 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 13:02:10.698728    8288 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 13:02:10.733446    8288 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 13:02:10.761431    8288 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 13:02:10.791980    8288 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 13:02:10.821017    8288 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 13:02:10.855108    8288 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 13:02:10.869173    8288 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1218 13:02:10.882545    8288 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 13:02:10.915488    8288 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 13:02:11.075153    8288 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 13:02:11.104491    8288 start.go:475] detecting cgroup driver to use...
	I1218 13:02:11.118872    8288 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1218 13:02:11.148035    8288 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1218 13:02:11.148082    8288 command_runner.go:130] > [Unit]
	I1218 13:02:11.148082    8288 command_runner.go:130] > Description=Docker Application Container Engine
	I1218 13:02:11.148082    8288 command_runner.go:130] > Documentation=https://docs.docker.com
	I1218 13:02:11.148138    8288 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1218 13:02:11.148138    8288 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1218 13:02:11.148138    8288 command_runner.go:130] > StartLimitBurst=3
	I1218 13:02:11.148179    8288 command_runner.go:130] > StartLimitIntervalSec=60
	I1218 13:02:11.148179    8288 command_runner.go:130] > [Service]
	I1218 13:02:11.148179    8288 command_runner.go:130] > Type=notify
	I1218 13:02:11.148179    8288 command_runner.go:130] > Restart=on-failure
	I1218 13:02:11.148179    8288 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1218 13:02:11.148179    8288 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1218 13:02:11.148179    8288 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1218 13:02:11.148243    8288 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1218 13:02:11.148243    8288 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1218 13:02:11.148278    8288 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1218 13:02:11.148278    8288 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1218 13:02:11.148278    8288 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1218 13:02:11.148278    8288 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1218 13:02:11.148278    8288 command_runner.go:130] > ExecStart=
	I1218 13:02:11.148278    8288 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1218 13:02:11.148278    8288 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1218 13:02:11.148278    8288 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1218 13:02:11.148278    8288 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1218 13:02:11.148278    8288 command_runner.go:130] > LimitNOFILE=infinity
	I1218 13:02:11.148278    8288 command_runner.go:130] > LimitNPROC=infinity
	I1218 13:02:11.148278    8288 command_runner.go:130] > LimitCORE=infinity
	I1218 13:02:11.148278    8288 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1218 13:02:11.148278    8288 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1218 13:02:11.148278    8288 command_runner.go:130] > TasksMax=infinity
	I1218 13:02:11.148278    8288 command_runner.go:130] > TimeoutStartSec=0
	I1218 13:02:11.148278    8288 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1218 13:02:11.148278    8288 command_runner.go:130] > Delegate=yes
	I1218 13:02:11.148278    8288 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1218 13:02:11.148278    8288 command_runner.go:130] > KillMode=process
	I1218 13:02:11.148278    8288 command_runner.go:130] > [Install]
	I1218 13:02:11.148278    8288 command_runner.go:130] > WantedBy=multi-user.target
	I1218 13:02:11.165543    8288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 13:02:11.201308    8288 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1218 13:02:11.251218    8288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 13:02:11.284765    8288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 13:02:11.321147    8288 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 13:02:11.372239    8288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 13:02:11.392319    8288 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 13:02:11.423841    8288 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1218 13:02:11.439900    8288 ssh_runner.go:195] Run: which cri-dockerd
	I1218 13:02:11.445294    8288 command_runner.go:130] > /usr/bin/cri-dockerd
	I1218 13:02:11.462149    8288 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1218 13:02:11.476761    8288 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1218 13:02:11.518973    8288 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1218 13:02:11.693083    8288 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1218 13:02:11.851749    8288 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1218 13:02:11.852221    8288 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1218 13:02:11.894276    8288 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 13:02:12.054256    8288 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1218 13:03:13.163771    8288 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I1218 13:03:13.163976    8288 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xe" for details.
	I1218 13:03:13.165491    8288 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1110154s)
	I1218 13:03:13.183073    8288 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
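	This is the actual failure behind RestartMultiNode's exit status 90: "systemctl restart docker" ran for over a minute and the unit never came up, so minikube dumps the docker journal (everything below). On a live machine the same triage is two commands over SSH, using the key and address already logged; a sketch (assumes an ssh client on PATH, and ignores errors for brevity):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        key := `C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-015900\id_rsa`
        for _, cmd := range []string{
            "sudo systemctl status docker.service --no-pager",
            "sudo journalctl --no-pager -u docker",
        } {
            out, _ := exec.Command("ssh", "-i", key, "docker@192.168.238.77", cmd).CombinedOutput()
            fmt.Println(string(out))
        }
    }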
	I1218 13:03:13.203573    8288 command_runner.go:130] > -- Journal begins at Mon 2023-12-18 13:00:50 UTC, ends at Mon 2023-12-18 13:03:13 UTC. --
	I1218 13:03:13.203642    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 systemd[1]: Starting Docker Application Container Engine...
	I1218 13:03:13.203642    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[665]: time="2023-12-18T13:01:45.851330219Z" level=info msg="Starting up"
	I1218 13:03:13.203642    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[665]: time="2023-12-18T13:01:45.852470695Z" level=info msg="containerd not running, starting managed containerd"
	I1218 13:03:13.203642    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[665]: time="2023-12-18T13:01:45.853966196Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	I1218 13:03:13.203711    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.889993220Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	I1218 13:03:13.203774    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.917647381Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I1218 13:03:13.203774    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.917709185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I1218 13:03:13.203832    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.920227254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I1218 13:03:13.203897    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.920346762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I1218 13:03:13.203967    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.920557977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1218 13:03:13.203967    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.920664084Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I1218 13:03:13.203967    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.921448636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I1218 13:03:13.204029    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.922019575Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I1218 13:03:13.204116    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.922125682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I1218 13:03:13.204116    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.922619715Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I1218 13:03:13.204116    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.923464372Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I1218 13:03:13.204213    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.923588080Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I1218 13:03:13.204213    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.923607582Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I1218 13:03:13.204288    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.923947405Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1218 13:03:13.204288    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.924049711Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I1218 13:03:13.204355    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.924090014Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I1218 13:03:13.204355    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.924107415Z" level=info msg="metadata content store policy set" policy=shared
	I1218 13:03:13.204355    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928340500Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I1218 13:03:13.204426    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928450008Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I1218 13:03:13.204426    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928473009Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I1218 13:03:13.204426    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928521212Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I1218 13:03:13.204492    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928540614Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I1218 13:03:13.204492    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928553114Z" level=info msg="NRI interface is disabled by configuration."
	I1218 13:03:13.204545    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928566015Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I1218 13:03:13.204545    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928638220Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I1218 13:03:13.204587    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928686123Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I1218 13:03:13.204587    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928705425Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I1218 13:03:13.204666    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928720526Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I1218 13:03:13.204666    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928837034Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I1218 13:03:13.204710    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928858435Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I1218 13:03:13.204762    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928872936Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I1218 13:03:13.204801    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928886237Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I1218 13:03:13.204801    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928901238Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I1218 13:03:13.204801    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928915639Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I1218 13:03:13.204851    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928929640Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I1218 13:03:13.204939    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928943641Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I1218 13:03:13.204939    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.929025246Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I1218 13:03:13.204977    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.929573983Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I1218 13:03:13.204977    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.929687491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I1218 13:03:13.205027    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.929709792Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I1218 13:03:13.205065    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.929813899Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I1218 13:03:13.205065    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.929880504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I1218 13:03:13.205065    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.929974110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I1218 13:03:13.205110    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.929995011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I1218 13:03:13.205110    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930008112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I1218 13:03:13.205149    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930020713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I1218 13:03:13.205149    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930038914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I1218 13:03:13.205149    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930055116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I1218 13:03:13.205149    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930073317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I1218 13:03:13.205221    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930089418Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I1218 13:03:13.205221    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930146322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I1218 13:03:13.205221    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930241128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I1218 13:03:13.205221    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930260829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I1218 13:03:13.205304    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930274130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I1218 13:03:13.205304    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930293432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I1218 13:03:13.205364    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930310033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I1218 13:03:13.205364    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930324034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I1218 13:03:13.205428    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930336034Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I1218 13:03:13.205428    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930353036Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I1218 13:03:13.205473    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930365636Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I1218 13:03:13.205473    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930379237Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I1218 13:03:13.205540    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930832868Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I1218 13:03:13.205540    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930974077Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I1218 13:03:13.205540    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.931117487Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I1218 13:03:13.205615    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.931210093Z" level=info msg="containerd successfully booted in 0.046147s"
	I1218 13:03:13.205615    8288 command_runner.go:130] > Dec 18 13:01:45 multinode-015900 dockerd[665]: time="2023-12-18T13:01:45.965175279Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1218 13:03:13.205615    8288 command_runner.go:130] > Dec 18 13:01:46 multinode-015900 dockerd[665]: time="2023-12-18T13:01:46.117628245Z" level=info msg="Loading containers: start."
	I1218 13:03:13.205615    8288 command_runner.go:130] > Dec 18 13:01:46 multinode-015900 dockerd[665]: time="2023-12-18T13:01:46.395353764Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I1218 13:03:13.205615    8288 command_runner.go:130] > Dec 18 13:01:46 multinode-015900 dockerd[665]: time="2023-12-18T13:01:46.463661373Z" level=info msg="Loading containers: done."
	I1218 13:03:13.205615    8288 command_runner.go:130] > Dec 18 13:01:46 multinode-015900 dockerd[665]: time="2023-12-18T13:01:46.481368290Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I1218 13:03:13.205615    8288 command_runner.go:130] > Dec 18 13:01:46 multinode-015900 dockerd[665]: time="2023-12-18T13:01:46.481466096Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I1218 13:03:13.205615    8288 command_runner.go:130] > Dec 18 13:01:46 multinode-015900 dockerd[665]: time="2023-12-18T13:01:46.481512999Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I1218 13:03:13.205615    8288 command_runner.go:130] > Dec 18 13:01:46 multinode-015900 dockerd[665]: time="2023-12-18T13:01:46.481559302Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I1218 13:03:13.205615    8288 command_runner.go:130] > Dec 18 13:01:46 multinode-015900 dockerd[665]: time="2023-12-18T13:01:46.481612505Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I1218 13:03:13.205615    8288 command_runner.go:130] > Dec 18 13:01:46 multinode-015900 dockerd[665]: time="2023-12-18T13:01:46.482204643Z" level=info msg="Daemon has completed initialization"
	I1218 13:03:13.205615    8288 command_runner.go:130] > Dec 18 13:01:46 multinode-015900 dockerd[665]: time="2023-12-18T13:01:46.519941423Z" level=info msg="API listen on [::]:2376"
	I1218 13:03:13.205615    8288 command_runner.go:130] > Dec 18 13:01:46 multinode-015900 dockerd[665]: time="2023-12-18T13:01:46.520077532Z" level=info msg="API listen on /var/run/docker.sock"
	I1218 13:03:13.205615    8288 command_runner.go:130] > Dec 18 13:01:46 multinode-015900 systemd[1]: Started Docker Application Container Engine.
	I1218 13:03:13.205615    8288 command_runner.go:130] > Dec 18 13:02:12 multinode-015900 dockerd[665]: time="2023-12-18T13:02:12.086813477Z" level=info msg="Processing signal 'terminated'"
	I1218 13:03:13.205615    8288 command_runner.go:130] > Dec 18 13:02:12 multinode-015900 systemd[1]: Stopping Docker Application Container Engine...
	I1218 13:03:13.205615    8288 command_runner.go:130] > Dec 18 13:02:12 multinode-015900 dockerd[665]: time="2023-12-18T13:02:12.088645877Z" level=info msg="Daemon shutdown complete"
	I1218 13:03:13.205615    8288 command_runner.go:130] > Dec 18 13:02:12 multinode-015900 dockerd[665]: time="2023-12-18T13:02:12.088671577Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1218 13:03:13.205615    8288 command_runner.go:130] > Dec 18 13:02:12 multinode-015900 dockerd[665]: time="2023-12-18T13:02:12.088733577Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I1218 13:03:13.205615    8288 command_runner.go:130] > Dec 18 13:02:12 multinode-015900 dockerd[665]: time="2023-12-18T13:02:12.088844877Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I1218 13:03:13.205615    8288 command_runner.go:130] > Dec 18 13:02:13 multinode-015900 systemd[1]: docker.service: Succeeded.
	I1218 13:03:13.205615    8288 command_runner.go:130] > Dec 18 13:02:13 multinode-015900 systemd[1]: Stopped Docker Application Container Engine.
	I1218 13:03:13.205615    8288 command_runner.go:130] > Dec 18 13:02:13 multinode-015900 systemd[1]: Starting Docker Application Container Engine...
	I1218 13:03:13.206202    8288 command_runner.go:130] > Dec 18 13:02:13 multinode-015900 dockerd[1038]: time="2023-12-18T13:02:13.164849577Z" level=info msg="Starting up"
	I1218 13:03:13.206202    8288 command_runner.go:130] > Dec 18 13:03:13 multinode-015900 dockerd[1038]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I1218 13:03:13.206202    8288 command_runner.go:130] > Dec 18 13:03:13 multinode-015900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I1218 13:03:13.206202    8288 command_runner.go:130] > Dec 18 13:03:13 multinode-015900 systemd[1]: docker.service: Failed with result 'exit-code'.
	I1218 13:03:13.206202    8288 command_runner.go:130] > Dec 18 13:03:13 multinode-015900 systemd[1]: Failed to start Docker Application Container Engine.
	I1218 13:03:13.212590    8288 out.go:177] 
	W1218 13:03:13.213401    8288 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2023-12-18 13:00:50 UTC, ends at Mon 2023-12-18 13:03:13 UTC. --
	Dec 18 13:01:45 multinode-015900 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 13:01:45 multinode-015900 dockerd[665]: time="2023-12-18T13:01:45.851330219Z" level=info msg="Starting up"
	Dec 18 13:01:45 multinode-015900 dockerd[665]: time="2023-12-18T13:01:45.852470695Z" level=info msg="containerd not running, starting managed containerd"
	Dec 18 13:01:45 multinode-015900 dockerd[665]: time="2023-12-18T13:01:45.853966196Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.889993220Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.917647381Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.917709185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.920227254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.920346762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.920557977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.920664084Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.921448636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.922019575Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.922125682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.922619715Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.923464372Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.923588080Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.923607582Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.923947405Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.924049711Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.924090014Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.924107415Z" level=info msg="metadata content store policy set" policy=shared
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928340500Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928450008Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928473009Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928521212Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928540614Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928553114Z" level=info msg="NRI interface is disabled by configuration."
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928566015Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928638220Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928686123Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928705425Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928720526Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928837034Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928858435Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928872936Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928886237Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928901238Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928915639Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928929640Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.928943641Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.929025246Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.929573983Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.929687491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.929709792Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.929813899Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.929880504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.929974110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.929995011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930008112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930020713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930038914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930055116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930073317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930089418Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930146322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930241128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930260829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930274130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930293432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930310033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930324034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930336034Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930353036Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930365636Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930379237Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930832868Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.930974077Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.931117487Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 18 13:01:45 multinode-015900 dockerd[671]: time="2023-12-18T13:01:45.931210093Z" level=info msg="containerd successfully booted in 0.046147s"
	Dec 18 13:01:45 multinode-015900 dockerd[665]: time="2023-12-18T13:01:45.965175279Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 18 13:01:46 multinode-015900 dockerd[665]: time="2023-12-18T13:01:46.117628245Z" level=info msg="Loading containers: start."
	Dec 18 13:01:46 multinode-015900 dockerd[665]: time="2023-12-18T13:01:46.395353764Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 18 13:01:46 multinode-015900 dockerd[665]: time="2023-12-18T13:01:46.463661373Z" level=info msg="Loading containers: done."
	Dec 18 13:01:46 multinode-015900 dockerd[665]: time="2023-12-18T13:01:46.481368290Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 18 13:01:46 multinode-015900 dockerd[665]: time="2023-12-18T13:01:46.481466096Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 18 13:01:46 multinode-015900 dockerd[665]: time="2023-12-18T13:01:46.481512999Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 18 13:01:46 multinode-015900 dockerd[665]: time="2023-12-18T13:01:46.481559302Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 18 13:01:46 multinode-015900 dockerd[665]: time="2023-12-18T13:01:46.481612505Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 18 13:01:46 multinode-015900 dockerd[665]: time="2023-12-18T13:01:46.482204643Z" level=info msg="Daemon has completed initialization"
	Dec 18 13:01:46 multinode-015900 dockerd[665]: time="2023-12-18T13:01:46.519941423Z" level=info msg="API listen on [::]:2376"
	Dec 18 13:01:46 multinode-015900 dockerd[665]: time="2023-12-18T13:01:46.520077532Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 18 13:01:46 multinode-015900 systemd[1]: Started Docker Application Container Engine.
	Dec 18 13:02:12 multinode-015900 dockerd[665]: time="2023-12-18T13:02:12.086813477Z" level=info msg="Processing signal 'terminated'"
	Dec 18 13:02:12 multinode-015900 systemd[1]: Stopping Docker Application Container Engine...
	Dec 18 13:02:12 multinode-015900 dockerd[665]: time="2023-12-18T13:02:12.088645877Z" level=info msg="Daemon shutdown complete"
	Dec 18 13:02:12 multinode-015900 dockerd[665]: time="2023-12-18T13:02:12.088671577Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 18 13:02:12 multinode-015900 dockerd[665]: time="2023-12-18T13:02:12.088733577Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 18 13:02:12 multinode-015900 dockerd[665]: time="2023-12-18T13:02:12.088844877Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 18 13:02:13 multinode-015900 systemd[1]: docker.service: Succeeded.
	Dec 18 13:02:13 multinode-015900 systemd[1]: Stopped Docker Application Container Engine.
	Dec 18 13:02:13 multinode-015900 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 13:02:13 multinode-015900 dockerd[1038]: time="2023-12-18T13:02:13.164849577Z" level=info msg="Starting up"
	Dec 18 13:03:13 multinode-015900 dockerd[1038]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 18 13:03:13 multinode-015900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 13:03:13 multinode-015900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 18 13:03:13 multinode-015900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1218 13:03:13.213401    8288 out.go:239] * 
	W1218 13:03:13.215408    8288 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 13:03:13.216228    8288 out.go:177] 
** /stderr **
multinode_test.go:384: failed to start cluster. args "out/minikube-windows-amd64.exe start -p multinode-015900 --wait=true -v=8 --alsologtostderr --driver=hyperv" : exit status 90
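Note: the root cause captured above is dockerd timing out while dialing /run/containerd/containerd.sock after the restart (started 13:02:13, failed 13:03:13, i.e. a one-minute dial window ending in "context deadline exceeded"), so systemd marks docker.service failed and minikube exits with status 90. A minimal triage sketch, assuming the multinode-015900 VM is still reachable over SSH; these are standard commands, not part of the test run:

	# Check whether containerd came back after the docker restart, and why dockerd
	# could not reach its socket before the dial deadline.
	minikube ssh -p multinode-015900 -- sudo systemctl status containerd docker
	minikube ssh -p multinode-015900 -- sudo journalctl -u containerd -u docker --no-pager | tail -n 50
	minikube ssh -p multinode-015900 -- ls -l /run/containerd/containerd.sock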
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900: exit status 6 (11.8236337s)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	W1218 13:03:13.603625    6748 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1218 13:03:25.207639    6748 status.go:415] kubeconfig endpoint: extract IP: "multinode-015900" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-015900" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/RestartMultiNode (181.59s)
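Note: the post-mortem status check exits 6 because the kubeconfig no longer contains an entry for "multinode-015900" and kubectl is pointed at a stale minikube-vm context. A short sketch of the repair path the warning itself suggests (hypothetical recovery steps, not executed during the run):

	# Re-point kubectl at the profile and confirm the context now exists.
	minikube update-context -p multinode-015900
	kubectl config get-contexts
	kubectl config use-context multinode-015900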
TestMultiNode/serial/ValidateNameConflict (440.8s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-015900
multinode_test.go:480: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-015900-m01 --driver=hyperv
E1218 13:03:42.374408   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 13:03:56.562447   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 13:04:02.418874   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
multinode_test.go:480: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-015900-m01 --driver=hyperv: (3m9.3514972s)
multinode_test.go:482: expected start profile command to fail. args "out/minikube-windows-amd64.exe start -p multinode-015900-m01 --driver=hyperv"
multinode_test.go:488: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-015900-m02 --driver=hyperv
E1218 13:08:25.557096   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 13:08:42.368733   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 13:08:56.556157   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 13:09:02.410704   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
multinode_test.go:488: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-015900-m02 --driver=hyperv: (3m10.0720857s)
multinode_test.go:495: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-015900
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-015900: exit status 119 (7.3995292s)
-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p multinode-015900"
-- /stdout --
** stderr ** 
	W1218 13:09:45.078433    2972 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! This is unusual - you may want to investigate using "minikube logs -p multinode-015900"
** /stderr **
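Note: every minikube invocation in this run logs the same warning about the Docker CLI context "default", because the context metadata file under .docker\contexts\meta is missing on the host. A hedged sketch of how one might inspect and reset the CLI context; these are standard docker CLI commands, and whether resetting clears the warning on this host is an assumption:

	# List known contexts and the current selection, then reset to the built-in default.
	docker context ls
	docker context use default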
multinode_test.go:500: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-015900-m02
multinode_test.go:500: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-015900-m02: (41.6517156s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-015900 -n multinode-015900: exit status 6 (12.0672692s)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	W1218 13:10:34.127105   12796 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1218 13:10:46.002275   12796 status.go:415] kubeconfig endpoint: extract IP: "multinode-015900" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-015900" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (440.80s)
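Note: this test expects `minikube start -p multinode-015900-m01` to be rejected, since the -m01/-m02 suffixes collide with the node-naming scheme of the existing multinode-015900 profile; here both starts succeeded (multinode_test.go:482), so the validation never fired. A sketch of the manual reproduction, assuming a live multi-node profile:

	# The node list shows the reserved -mNN names; starting a profile that reuses
	# one of them should fail name-conflict validation.
	minikube node list -p multinode-015900
	minikube start -p multinode-015900-m01 --driver=hyperv   # expected: name-conflict error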
TestRunningBinaryUpgrade (540.45s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.6.2.1269155921.exe start -p running-upgrade-208100 --memory=2200 --vm-driver=hyperv
E1218 13:28:42.377624   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
version_upgrade_test.go:133: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.6.2.1269155921.exe start -p running-upgrade-208100 --memory=2200 --vm-driver=hyperv: (4m41.813971s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-208100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p running-upgrade-208100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (3m18.7757074s)
-- stdout --
	* [running-upgrade-208100] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17824
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the hyperv driver based on existing profile
	* Starting control plane node running-upgrade-208100 in cluster running-upgrade-208100
	* Updating the running hyperv "running-upgrade-208100" VM ...
	
	
-- /stdout --
** stderr ** 
	W1218 13:30:06.726058   11484 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1218 13:30:06.806380   11484 out.go:296] Setting OutFile to fd 1724 ...
	I1218 13:30:06.807350   11484 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 13:30:06.807350   11484 out.go:309] Setting ErrFile to fd 1712...
	I1218 13:30:06.807350   11484 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 13:30:06.831365   11484 out.go:303] Setting JSON to false
	I1218 13:30:06.835326   11484 start.go:128] hostinfo: {"hostname":"minikube7","uptime":6681,"bootTime":1702899525,"procs":210,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1218 13:30:06.835326   11484 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1218 13:30:06.837348   11484 out.go:177] * [running-upgrade-208100] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1218 13:30:06.838344   11484 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 13:30:06.838344   11484 notify.go:220] Checking for updates...
	I1218 13:30:06.839350   11484 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 13:30:06.840372   11484 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1218 13:30:06.841342   11484 out.go:177]   - MINIKUBE_LOCATION=17824
	I1218 13:30:06.841342   11484 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 13:30:06.843342   11484 config.go:182] Loaded profile config "running-upgrade-208100": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1218 13:30:06.844347   11484 start_flags.go:694] config upgrade: Driver=hyperv
	I1218 13:30:06.844347   11484 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd
	I1218 13:30:06.844347   11484 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\running-upgrade-208100\config.json ...
	I1218 13:30:06.849353   11484 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1218 13:30:06.849353   11484 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 13:30:14.465874   11484 out.go:177] * Using the hyperv driver based on existing profile
	I1218 13:30:14.467779   11484 start.go:298] selected driver: hyperv
	I1218 13:30:14.467779   11484 start.go:902] validating driver "hyperv" against &{Name:running-upgrade-208100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.226.193 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1218 13:30:14.467971   11484 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 13:30:14.530819   11484 cni.go:84] Creating CNI manager for ""
	I1218 13:30:14.530819   11484 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1218 13:30:14.530819   11484 start_flags.go:323] config:
	{Name:running-upgrade-208100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.226.193 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1218 13:30:14.530819   11484 iso.go:125] acquiring lock: {Name:mka7fdf4332632f41b99a369cc525e40499a0030 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 13:30:14.566833   11484 out.go:177] * Starting control plane node running-upgrade-208100 in cluster running-upgrade-208100
	I1218 13:30:14.568031   11484 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	W1218 13:30:14.632143   11484 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1218 13:30:14.632492   11484 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\running-upgrade-208100\config.json ...
	I1218 13:30:14.632632   11484 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1218 13:30:14.632675   11484 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0
	I1218 13:30:14.632675   11484 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5
	I1218 13:30:14.632675   11484 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I1218 13:30:14.632675   11484 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I1218 13:30:14.632557   11484 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0
	I1218 13:30:14.632675   11484 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0
	I1218 13:30:14.635467   11484 start.go:365] acquiring machines lock for running-upgrade-208100: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1218 13:30:14.632632   11484 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0
	I1218 13:30:14.885273   11484 cache.go:107] acquiring lock: {Name:mke680978131adbec647605a81bab7c783de93d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 13:30:14.885751   11484 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1218 13:30:14.887763   11484 cache.go:107] acquiring lock: {Name:mk3a663ba67028a054dd5a6e96ba367c56e950d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 13:30:14.887763   11484 cache.go:107] acquiring lock: {Name:mkeac0ccf1d6f0e0eb0c19801602a218964c6025 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 13:30:14.887763   11484 cache.go:107] acquiring lock: {Name:mk6522f86f404131d1768d0de0ce775513ec42e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 13:30:14.888720   11484 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1218 13:30:14.888720   11484 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I1218 13:30:14.888720   11484 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I1218 13:30:14.895723   11484 cache.go:107] acquiring lock: {Name:mk43c24b3570a50e54ec9f1dc43aba5ea2e54859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 13:30:14.895723   11484 cache.go:107] acquiring lock: {Name:mk945c9573a262bf2c410f3ec338c9e4cbac7ce3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 13:30:14.895723   11484 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1218 13:30:14.895723   11484 cache.go:107] acquiring lock: {Name:mk1869bccfa4db5e538bd31af28e9c95a48df16c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 13:30:14.895723   11484 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 263.0465ms
	I1218 13:30:14.895723   11484 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1218 13:30:14.895723   11484 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I1218 13:30:14.895723   11484 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1218 13:30:14.904730   11484 cache.go:107] acquiring lock: {Name:mkc6e9060bea9211e4f8126ac5de344442cb8c23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 13:30:14.904730   11484 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I1218 13:30:14.921726   11484 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1218 13:30:14.921726   11484 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1218 13:30:14.922712   11484 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I1218 13:30:14.929745   11484 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I1218 13:30:14.936717   11484 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I1218 13:30:14.937745   11484 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I1218 13:30:14.937745   11484 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	W1218 13:30:15.072071   11484 image.go:187] authn lookup for registry.k8s.io/pause:3.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1218 13:30:15.197424   11484 image.go:187] authn lookup for registry.k8s.io/etcd:3.4.3-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1218 13:30:15.307900   11484 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1218 13:30:15.433896   11484 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1218 13:30:15.545252   11484 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	W1218 13:30:15.556579   11484 image.go:187] authn lookup for registry.k8s.io/coredns:1.6.5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1218 13:30:15.569579   11484 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I1218 13:30:15.625600   11484 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0
	W1218 13:30:15.662623   11484 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1218 13:30:15.752517   11484 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 exists
	I1218 13:30:15.752788   11484 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.1" took 1.1199319s
	I1218 13:30:15.752877   11484 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 succeeded
	W1218 13:30:15.771573   11484 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1218 13:30:15.837243   11484 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0
	I1218 13:30:15.904459   11484 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5
	I1218 13:30:15.951199   11484 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0
	I1218 13:30:16.050533   11484 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0
	I1218 13:30:16.689956   11484 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 exists
	I1218 13:30:16.689956   11484 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns_1.6.5" took 2.0572719s
	I1218 13:30:16.689956   11484 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 succeeded
	I1218 13:30:16.738569   11484 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 exists
	I1218 13:30:16.739570   11484 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.17.0" took 2.1068853s
	I1218 13:30:16.739570   11484 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 succeeded
	I1218 13:30:17.094422   11484 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 exists
	I1218 13:30:17.094422   11484 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.17.0" took 2.4601321s
	I1218 13:30:17.094422   11484 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 succeeded
	I1218 13:30:17.249656   11484 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 exists
	I1218 13:30:17.249656   11484 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.17.0" took 2.6149853s
	I1218 13:30:17.249656   11484 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 succeeded
	I1218 13:30:17.784778   11484 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 exists
	I1218 13:30:17.784778   11484 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.17.0" took 3.1442658s
	I1218 13:30:17.784778   11484 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 succeeded
	I1218 13:30:18.318670   11484 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 exists
	I1218 13:30:18.318916   11484 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.4.3-0" took 3.6860928s
	I1218 13:30:18.318916   11484 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 succeeded
	I1218 13:30:18.319015   11484 cache.go:87] Successfully saved all images to host disk.
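	Note: the repeated "authn lookup ... A specified logon session does not exist" warnings above come from the Windows credential helper failing in a non-interactive CI session; each lookup then falls back to anonymous access, which succeeds for registry.k8s.io, so all images are still cached. A hedged way to confirm which helper is configured; the config path and the wincred helper are the usual Docker-on-Windows defaults, so treat them as assumptions:

	# Show the configured credsStore; docker-credential-wincred is the common
	# default on Windows and is the helper that needs an interactive logon session.
	cat ~/.docker/config.json
	docker-credential-wincred list   # fails the same way outside a logon session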
	I1218 13:31:45.263780   11484 start.go:369] acquired machines lock for "running-upgrade-208100" in 1m30.6279357s
	I1218 13:31:45.264649   11484 start.go:96] Skipping create...Using existing machine configuration
	I1218 13:31:45.264716   11484 fix.go:54] fixHost starting: minikube
	I1218 13:31:45.265786   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-208100 ).state
	I1218 13:31:47.659215   11484 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:31:47.659603   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:31:47.659603   11484 fix.go:102] recreateIfNeeded on running-upgrade-208100: state=Running err=<nil>
	W1218 13:31:47.659603   11484 fix.go:128] unexpected machine state, will restart: <nil>
	I1218 13:31:47.660535   11484 out.go:177] * Updating the running hyperv "running-upgrade-208100" VM ...
	I1218 13:31:47.661448   11484 machine.go:88] provisioning docker machine ...
	I1218 13:31:47.661519   11484 buildroot.go:166] provisioning hostname "running-upgrade-208100"
	I1218 13:31:47.661519   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-208100 ).state
	I1218 13:31:50.032016   11484 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:31:50.032016   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:31:50.032016   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-208100 ).networkadapters[0]).ipaddresses[0]
	I1218 13:31:53.312586   11484 main.go:141] libmachine: [stdout =====>] : 192.168.226.193
	
	I1218 13:31:53.312586   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:31:53.321552   11484 main.go:141] libmachine: Using SSH client type: native
	I1218 13:31:53.322546   11484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.226.193 22 <nil> <nil>}
	I1218 13:31:53.322546   11484 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-208100 && echo "running-upgrade-208100" | sudo tee /etc/hostname
	I1218 13:31:53.509681   11484 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-208100
	
	I1218 13:31:53.509681   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-208100 ).state
	I1218 13:31:56.154825   11484 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:31:56.155003   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:31:56.155003   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-208100 ).networkadapters[0]).ipaddresses[0]
	I1218 13:31:58.767328   11484 main.go:141] libmachine: [stdout =====>] : 192.168.226.193
	
	I1218 13:31:58.767546   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:31:58.773697   11484 main.go:141] libmachine: Using SSH client type: native
	I1218 13:31:58.774414   11484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.226.193 22 <nil> <nil>}
	I1218 13:31:58.774414   11484 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-208100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-208100/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-208100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 13:31:58.930451   11484 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1218 13:31:58.930451   11484 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1218 13:31:58.930451   11484 buildroot.go:174] setting up certificates
	I1218 13:31:58.930451   11484 provision.go:83] configureAuth start
	I1218 13:31:58.930451   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-208100 ).state
	I1218 13:32:01.169806   11484 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:32:01.169806   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:32:01.169910   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-208100 ).networkadapters[0]).ipaddresses[0]
	I1218 13:32:04.239017   11484 main.go:141] libmachine: [stdout =====>] : 192.168.226.193
	
	I1218 13:32:04.239017   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:32:04.239017   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-208100 ).state
	I1218 13:32:06.544274   11484 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:32:06.544274   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:32:06.544480   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-208100 ).networkadapters[0]).ipaddresses[0]
	I1218 13:32:09.227887   11484 main.go:141] libmachine: [stdout =====>] : 192.168.226.193
	
	I1218 13:32:09.227887   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:32:09.227887   11484 provision.go:138] copyHostCerts
	I1218 13:32:09.228426   11484 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1218 13:32:09.228426   11484 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1218 13:32:09.228750   11484 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1218 13:32:09.230395   11484 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1218 13:32:09.230395   11484 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1218 13:32:09.230395   11484 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1218 13:32:09.232558   11484 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1218 13:32:09.232558   11484 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1218 13:32:09.232999   11484 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1218 13:32:09.234101   11484 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.running-upgrade-208100 san=[192.168.226.193 192.168.226.193 localhost 127.0.0.1 minikube running-upgrade-208100]
	I1218 13:32:09.493362   11484 provision.go:172] copyRemoteCerts
	I1218 13:32:09.506340   11484 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 13:32:09.506340   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-208100 ).state
	I1218 13:32:11.757255   11484 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:32:11.757341   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:32:11.757420   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-208100 ).networkadapters[0]).ipaddresses[0]
	I1218 13:32:14.564843   11484 main.go:141] libmachine: [stdout =====>] : 192.168.226.193
	
	I1218 13:32:14.564843   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:32:14.565495   11484 sshutil.go:53] new ssh client: &{IP:192.168.226.193 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\running-upgrade-208100\id_rsa Username:docker}
	I1218 13:32:14.682350   11484 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1759884s)
	I1218 13:32:14.682843   11484 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1218 13:32:14.702762   11484 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I1218 13:32:14.724429   11484 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 13:32:14.744355   11484 provision.go:86] duration metric: configureAuth took 15.813839s
	I1218 13:32:14.744355   11484 buildroot.go:189] setting minikube options for container-runtime
	I1218 13:32:14.744355   11484 config.go:182] Loaded profile config "running-upgrade-208100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1218 13:32:14.744355   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-208100 ).state
	I1218 13:32:16.913324   11484 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:32:16.913466   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:32:16.913466   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-208100 ).networkadapters[0]).ipaddresses[0]
	I1218 13:32:19.584964   11484 main.go:141] libmachine: [stdout =====>] : 192.168.226.193
	
	I1218 13:32:19.584964   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:32:19.590515   11484 main.go:141] libmachine: Using SSH client type: native
	I1218 13:32:19.591295   11484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.226.193 22 <nil> <nil>}
	I1218 13:32:19.591295   11484 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1218 13:32:19.750767   11484 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1218 13:32:19.750858   11484 buildroot.go:70] root file system type: tmpfs
	I1218 13:32:19.751124   11484 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1218 13:32:19.751248   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-208100 ).state
	I1218 13:32:21.940307   11484 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:32:21.940597   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:32:21.940802   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-208100 ).networkadapters[0]).ipaddresses[0]
	I1218 13:32:24.564543   11484 main.go:141] libmachine: [stdout =====>] : 192.168.226.193
	
	I1218 13:32:24.564808   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:32:24.572384   11484 main.go:141] libmachine: Using SSH client type: native
	I1218 13:32:24.572787   11484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.226.193 22 <nil> <nil>}
	I1218 13:32:24.573361   11484 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1218 13:32:24.736427   11484 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1218 13:32:24.736427   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-208100 ).state
	I1218 13:32:26.948046   11484 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:32:26.948046   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:32:26.948046   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-208100 ).networkadapters[0]).ipaddresses[0]
	I1218 13:32:29.767785   11484 main.go:141] libmachine: [stdout =====>] : 192.168.226.193
	
	I1218 13:32:29.767785   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:32:29.773723   11484 main.go:141] libmachine: Using SSH client type: native
	I1218 13:32:29.774450   11484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.226.193 22 <nil> <nil>}
	I1218 13:32:29.774450   11484 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1218 13:32:43.866342   11484 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service
	+++ /lib/systemd/system/docker.service.new
	@@ -3,9 +3,12 @@
	 Documentation=https://docs.docker.com
	 After=network.target  minikube-automount.service docker.socket
	 Requires= minikube-automount.service docker.socket 
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	+Restart=on-failure
	 
	 
	 
	@@ -21,7 +24,7 @@
	 # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	 ExecStart=
	 ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	-ExecReload=/bin/kill -s HUP 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1218 13:32:43.866342   11484 machine.go:91] provisioned docker machine in 56.2046637s
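	Note: the unit diff above shows the substantive part of the re-provisioning: besides adding Restart=on-failure and the StartLimit settings, it repairs `ExecReload=/bin/kill -s HUP ` (variable lost) to `ExecReload=/bin/kill -s HUP $MAINPID`. The escaped `\$MAINPID` in the earlier `printf %s "..." | sudo tee` command is what keeps the variable literal; if the backslash is dropped, the shell expands it (usually to nothing) before tee writes the file. A minimal stand-alone illustration of the pitfall:

	# Unescaped: the shell expands $MAINPID (normally empty) before printf emits it.
	printf '%s\n' "ExecReload=/bin/kill -s HUP $MAINPID"
	# Escaped: the literal string reaches the unit file; systemd expands it at runtime.
	printf '%s\n' "ExecReload=/bin/kill -s HUP \$MAINPID"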
	I1218 13:32:43.866342   11484 start.go:300] post-start starting for "running-upgrade-208100" (driver="hyperv")
	I1218 13:32:43.866342   11484 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 13:32:43.886351   11484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 13:32:43.886351   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-208100 ).state
	I1218 13:32:46.483320   11484 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:32:46.483320   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:32:46.483320   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-208100 ).networkadapters[0]).ipaddresses[0]
	I1218 13:32:49.468192   11484 main.go:141] libmachine: [stdout =====>] : 192.168.226.193
	
	I1218 13:32:49.468383   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:32:49.468759   11484 sshutil.go:53] new ssh client: &{IP:192.168.226.193 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\running-upgrade-208100\id_rsa Username:docker}
	I1218 13:32:49.589585   11484 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.7032106s)
	I1218 13:32:49.603577   11484 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 13:32:49.609582   11484 info.go:137] Remote host: Buildroot 2019.02.7
	I1218 13:32:49.609582   11484 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1218 13:32:49.610580   11484 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1218 13:32:49.611586   11484 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem -> 149282.pem in /etc/ssl/certs
	I1218 13:32:49.625600   11484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 13:32:49.634984   11484 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem --> /etc/ssl/certs/149282.pem (1708 bytes)
	I1218 13:32:49.658646   11484 start.go:303] post-start completed in 5.7922799s
	I1218 13:32:49.658646   11484 fix.go:56] fixHost completed within 1m4.393666s
	I1218 13:32:49.658646   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-208100 ).state
	I1218 13:32:51.971042   11484 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:32:51.971079   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:32:51.971153   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-208100 ).networkadapters[0]).ipaddresses[0]
	I1218 13:32:54.945425   11484 main.go:141] libmachine: [stdout =====>] : 192.168.226.193
	
	I1218 13:32:54.945425   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:32:54.950420   11484 main.go:141] libmachine: Using SSH client type: native
	I1218 13:32:54.950420   11484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.226.193 22 <nil> <nil>}
	I1218 13:32:54.950420   11484 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1218 13:32:55.119787   11484 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702906375.130501822
	
	I1218 13:32:55.119787   11484 fix.go:206] guest clock: 1702906375.130501822
	I1218 13:32:55.119787   11484 fix.go:219] Guest: 2023-12-18 13:32:55.130501822 +0000 UTC Remote: 2023-12-18 13:32:49.6586461 +0000 UTC m=+163.043559901 (delta=5.471855722s)
	I1218 13:32:55.119787   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-208100 ).state
	I1218 13:32:57.425355   11484 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:32:57.425670   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:32:57.425729   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-208100 ).networkadapters[0]).ipaddresses[0]
	I1218 13:33:00.190217   11484 main.go:141] libmachine: [stdout =====>] : 192.168.226.193
	
	I1218 13:33:00.190416   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:00.196371   11484 main.go:141] libmachine: Using SSH client type: native
	I1218 13:33:00.197068   11484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.226.193 22 <nil> <nil>}
	I1218 13:33:00.197068   11484 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702906375
	I1218 13:33:00.371748   11484 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Dec 18 13:32:55 UTC 2023
	
	I1218 13:33:00.371801   11484 fix.go:226] clock set: Mon Dec 18 13:32:55 UTC 2023
	 (err=<nil>)
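The exchange above is the guest-clock fix: `date +%s.%N` reads the VM clock, fix.go computes the skew against the host timestamp (delta=5.471855722s here), and `sudo date -s @1702906375` snaps the guest to the host epoch. A hedged sketch of the same check over plain ssh (the 2-second threshold is an assumption, not a value from the log):

    #!/usr/bin/env bash
    # Resync a VM clock to the host when the skew exceeds a small threshold.
    VM=docker@192.168.226.193            # user/IP as seen in the log
    GUEST_EPOCH=$(ssh "$VM" 'date +%s')  # guest view of current time
    HOST_EPOCH=$(date +%s)               # host view
    SKEW=$(( GUEST_EPOCH - HOST_EPOCH ))
    if [ "${SKEW#-}" -gt 2 ]; then       # absolute skew > 2s (assumed bound)
      ssh "$VM" "sudo date -s @${HOST_EPOCH}"
    fi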
	I1218 13:33:00.371801   11484 start.go:83] releasing machines lock for "running-upgrade-208100", held for 1m15.1077134s
	I1218 13:33:00.372009   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-208100 ).state
	I1218 13:33:02.643838   11484 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:33:02.643980   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:02.644066   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-208100 ).networkadapters[0]).ipaddresses[0]
	I1218 13:33:05.368618   11484 main.go:141] libmachine: [stdout =====>] : 192.168.226.193
	
	I1218 13:33:05.368694   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:05.372980   11484 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 13:33:05.373098   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-208100 ).state
	I1218 13:33:05.390987   11484 ssh_runner.go:195] Run: cat /version.json
	I1218 13:33:05.390987   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-208100 ).state
	I1218 13:33:07.855997   11484 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:33:07.855997   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:07.855997   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-208100 ).networkadapters[0]).ipaddresses[0]
	I1218 13:33:07.887432   11484 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:33:07.887502   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:07.887502   11484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-208100 ).networkadapters[0]).ipaddresses[0]
	I1218 13:33:10.880075   11484 main.go:141] libmachine: [stdout =====>] : 192.168.226.193
	
	I1218 13:33:10.880075   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:10.880379   11484 sshutil.go:53] new ssh client: &{IP:192.168.226.193 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\running-upgrade-208100\id_rsa Username:docker}
	I1218 13:33:10.931179   11484 main.go:141] libmachine: [stdout =====>] : 192.168.226.193
	
	I1218 13:33:10.931179   11484 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:10.931474   11484 sshutil.go:53] new ssh client: &{IP:192.168.226.193 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\running-upgrade-208100\id_rsa Username:docker}
	I1218 13:33:10.994621   11484 ssh_runner.go:235] Completed: cat /version.json: (5.6036117s)
	W1218 13:33:10.994621   11484 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1218 13:33:11.010504   11484 ssh_runner.go:195] Run: systemctl --version
	I1218 13:33:11.949925   11484 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (6.5768003s)
	I1218 13:33:11.964187   11484 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 13:33:11.971901   11484 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 13:33:11.989851   11484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1218 13:33:12.013664   11484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1218 13:33:12.021707   11484 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
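The two `find ... -exec sed` runs above are the CNI subnet normalization: any bridge or podman config under /etc/cni/net.d (excluding *.mk_disabled files) would have its "subnet" rewritten to 10.244.0.0/16, and podman configs their "gateway" to 10.244.0.1; here no bridge configs matched, hence "nothing to configure". The core substitution, unrolled against a single hypothetical file for readability (the conflist path is illustrative; the sed expression mirrors the log):

    # Rewrite the pod subnet in one bridge CNI config (path assumed).
    sudo sed -i -r \
      -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' \
      /etc/cni/net.d/100-bridge.conflist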
	I1218 13:33:12.022252   11484 start.go:475] detecting cgroup driver to use...
	I1218 13:33:12.022570   11484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 13:33:12.053252   11484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I1218 13:33:12.076383   11484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 13:33:12.084898   11484 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 13:33:12.101533   11484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 13:33:12.124807   11484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 13:33:12.147266   11484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 13:33:12.167356   11484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 13:33:12.191366   11484 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 13:33:12.216797   11484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 13:33:12.239696   11484 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 13:33:12.265425   11484 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 13:33:12.289182   11484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 13:33:12.522998   11484 ssh_runner.go:195] Run: sudo systemctl restart containerd
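The sed batch at 13:33:12.05-12.24 rewrites /etc/containerd/config.toml in place: the sandbox (pause) image is pinned to registry.k8s.io/pause:3.1, restrict_oom_score_adj is forced to false, SystemdCgroup is set to false (the "cgroupfs" driver named in the log), the legacy io.containerd.runtime.v1.linux and runc.v1 shims are mapped to io.containerd.runc.v2, and conf_dir is pointed at /etc/cni/net.d; bridge-nf-call-iptables and ip_forward are then enabled before the daemon-reload and restart above. After the edits, the touched keys would read roughly as follows (an illustrative fragment; the surrounding TOML table paths vary by containerd version and are not shown in the log):

    sandbox_image = "registry.k8s.io/pause:3.1"
    restrict_oom_score_adj = false
    SystemdCgroup = false
    conf_dir = "/etc/cni/net.d"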
	I1218 13:33:12.549137   11484 start.go:475] detecting cgroup driver to use...
	I1218 13:33:12.566343   11484 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1218 13:33:12.599477   11484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 13:33:12.628906   11484 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1218 13:33:12.667340   11484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 13:33:12.693518   11484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 13:33:12.709417   11484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 13:33:12.743962   11484 ssh_runner.go:195] Run: which cri-dockerd
	I1218 13:33:12.764499   11484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1218 13:33:12.773419   11484 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1218 13:33:12.804713   11484 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1218 13:33:12.997568   11484 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1218 13:33:13.183141   11484 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1218 13:33:13.183433   11484 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
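The 130-byte /etc/docker/daemon.json pushed here carries the cgroup-driver choice for dockerd itself. The log confirms only that the driver is "cgroupfs"; a representative fragment using docker's standard key for this setting (the rest of the file's contents are not shown in the log):

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }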
	I1218 13:33:13.214688   11484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 13:33:13.417951   11484 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1218 13:33:25.227939   11484 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.8098395s)
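This restart is where the run fails: `sudo systemctl restart docker` returns non-zero after 11.8s ("Job for docker.service failed because the control process exited with error code"), and minikube exits with RUNTIME_ENABLE, dumping the docker journal below. The error text names the standard post-mortem, reproducible by hand on the guest:

    # Diagnose the failed restart (commands quoted by the error message).
    systemctl status docker.service
    journalctl -xe
    sudo journalctl --no-pager -u docker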
	I1218 13:33:25.253347   11484 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1218 13:33:25.308846   11484 out.go:177] 
	W1218 13:33:25.309895   11484 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Logs begin at Mon 2023-12-18 13:26:54 UTC, end at Mon 2023-12-18 13:33:25 UTC. --
	Dec 18 13:28:31 running-upgrade-208100 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.243652308Z" level=info msg="Starting up"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.247681693Z" level=info msg="libcontainerd: started new containerd process" pid=2743
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.247748493Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.247761793Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.247784393Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.247819993Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.317367438Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.317837736Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.318028135Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.318385134Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.318493634Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.321169024Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.321272323Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.321938421Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.322329919Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.322743218Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.322852218Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.322979517Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.322991817Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.323000417Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.336140569Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.336369668Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.336435068Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.336465868Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.336495767Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.336519767Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.336543267Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.336566967Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.336589467Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.336612467Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.336917866Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.337152565Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338092762Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338375561Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338447060Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338482660Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338505460Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338536560Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338558560Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338579660Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338599460Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338623560Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338644360Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338742759Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338780659Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338805259Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338835359Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.339009958Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.339154758Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.339171258Z" level=info msg="containerd successfully booted in 0.025245s"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.358025588Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.358083388Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.358170588Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.358524187Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.362117973Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.362727171Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.362908471Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.363006470Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.413882584Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.414169782Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.414396882Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.414421882Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.414436181Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.414449581Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.414922280Z" level=info msg="Loading containers: start."
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.613467151Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 18 13:28:32 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:32.254301500Z" level=info msg="Loading containers: done."
	Dec 18 13:28:32 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:32.337560294Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Dec 18 13:28:32 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:32.337957593Z" level=info msg="Daemon has completed initialization"
	Dec 18 13:28:33 running-upgrade-208100 systemd[1]: Started Docker Application Container Engine.
	Dec 18 13:28:33 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:33.325035771Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 18 13:28:33 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:33.325170870Z" level=info msg="API listen on [::]:2376"
	Dec 18 13:29:51 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:29:51.508545072Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7f2d09710da7b3e1aa113e5504828ad48bde6408ca3035fe471637c1591b5337/shim.sock" debug=false pid=4314
	Dec 18 13:29:51 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:29:51.522258609Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b4aadbe414c87bc0a6a9c5d54898e8e6586dd660370d81c0633f9b510866451a/shim.sock" debug=false pid=4320
	Dec 18 13:29:51 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:29:51.538945331Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/12d55b9711dcac3719f9e3d3aec999ef177231c0d9c56a782ad7fd67bb3af861/shim.sock" debug=false pid=4332
	Dec 18 13:29:51 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:29:51.802458008Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/71a621a4e447cb405091a56ff70084b8874733de6dc714d8b4339a19294b6b7c/shim.sock" debug=false pid=4401
	Dec 18 13:29:51 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:29:51.809409076Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1a15a34c3ba1827ceb2dab10b4105531d41b9b40b5f4bc9f6c7084e4593b0240/shim.sock" debug=false pid=4412
	Dec 18 13:29:52 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:29:52.234059918Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2c43ae2eaaa18fe33132f0344d49af93d5b2eee42c2b1b94a8b04dfc78bdce2b/shim.sock" debug=false pid=4565
	Dec 18 13:29:52 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:29:52.254597224Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/edee80fb1c6db38f1d12cab3ebd158a2ef78f0f240c9243587ca70ffda5a4ce4/shim.sock" debug=false pid=4570
	Dec 18 13:29:52 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:29:52.379828450Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/642295739e5d48eefc84904ce67980442aee0dee10137ca3bd4a0d646d78bab5/shim.sock" debug=false pid=4619
	Dec 18 13:29:52 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:29:52.442269364Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f0c791e871065c7f10b7f085c35da54d1d6da1e4e02fa3277e859ae7f9785801/shim.sock" debug=false pid=4644
	Dec 18 13:29:52 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:29:52.505307576Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ba17c735408707bd4a507b8f5b47bf40d2d706f46012ba85cee75ad517b2e1f0/shim.sock" debug=false pid=4674
	Dec 18 13:30:11 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:30:11.327884031Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8a27b9421df2c52e3d655eced7521464b06efd553e58b900e41aca8f7e9de2db/shim.sock" debug=false pid=5525
	Dec 18 13:30:11 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:30:11.955412317Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c4e50e6381dba0a35b43056fb44ed5fa6defb598aa8beb909736580d0c957c69/shim.sock" debug=false pid=5591
	Dec 18 13:30:17 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:30:17.885468082Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3449aaaf3d12a19f38ccee2051a8eadda2a8c48f8221dd22586e2a799e710ec9/shim.sock" debug=false pid=5773
	Dec 18 13:30:19 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:30:19.284626053Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/df99028cb7908df1e06b82d2203c059159149adf6e669828d9666c518717004a/shim.sock" debug=false pid=5849
	Dec 18 13:30:21 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:30:21.333625605Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e0e7e1b6d855394c321e544137d3f4f8f959ff2fe26442384b00916ab4ac7a5b/shim.sock" debug=false pid=5912
	Dec 18 13:30:21 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:30:21.985129007Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4e68fa7a9d9653e6c4dc943a9ee72818ba9a3b035af33fb0e7029c9a346da9a4/shim.sock" debug=false pid=5968
	Dec 18 13:30:22 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:30:22.144048217Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/36cadd8dc8d4f45022d4346e79389b61f6f74160a2558991a193ced7c86d70a6/shim.sock" debug=false pid=6005
	Dec 18 13:30:24 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:30:24.235667650Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f9dac79e612d8efe14542de693f50ed33f0a7d649c1fb13c72addd79bc7e2e0b/shim.sock" debug=false pid=6102
	Dec 18 13:32:30 running-upgrade-208100 systemd[1]: Stopping Docker Application Container Engine...
	Dec 18 13:32:30 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:30.352320042Z" level=info msg="Processing signal 'terminated'"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.245622829Z" level=info msg="shim reaped" id=12d55b9711dcac3719f9e3d3aec999ef177231c0d9c56a782ad7fd67bb3af861
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.255733696Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.646982940Z" level=info msg="shim reaped" id=4e68fa7a9d9653e6c4dc943a9ee72818ba9a3b035af33fb0e7029c9a346da9a4
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.655160375Z" level=info msg="shim reaped" id=1a15a34c3ba1827ceb2dab10b4105531d41b9b40b5f4bc9f6c7084e4593b0240
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.659100840Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.661490079Z" level=info msg="shim reaped" id=edee80fb1c6db38f1d12cab3ebd158a2ef78f0f240c9243587ca70ffda5a4ce4
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.671326441Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.677080436Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.677629245Z" level=warning msg="edee80fb1c6db38f1d12cab3ebd158a2ef78f0f240c9243587ca70ffda5a4ce4 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/edee80fb1c6db38f1d12cab3ebd158a2ef78f0f240c9243587ca70ffda5a4ce4/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.729409098Z" level=info msg="shim reaped" id=2c43ae2eaaa18fe33132f0344d49af93d5b2eee42c2b1b94a8b04dfc78bdce2b
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.747146790Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.748034905Z" level=warning msg="2c43ae2eaaa18fe33132f0344d49af93d5b2eee42c2b1b94a8b04dfc78bdce2b cleanup: failed to unmount IPC: umount /var/lib/docker/containers/2c43ae2eaaa18fe33132f0344d49af93d5b2eee42c2b1b94a8b04dfc78bdce2b/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.794562671Z" level=info msg="shim reaped" id=b4aadbe414c87bc0a6a9c5d54898e8e6586dd660370d81c0633f9b510866451a
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.795321484Z" level=info msg="shim reaped" id=3449aaaf3d12a19f38ccee2051a8eadda2a8c48f8221dd22586e2a799e710ec9
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.804009527Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.810510434Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.854805663Z" level=info msg="shim reaped" id=8a27b9421df2c52e3d655eced7521464b06efd553e58b900e41aca8f7e9de2db
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.866347353Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.892724688Z" level=info msg="shim reaped" id=71a621a4e447cb405091a56ff70084b8874733de6dc714d8b4339a19294b6b7c
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.903063958Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.935048185Z" level=info msg="shim reaped" id=c4e50e6381dba0a35b43056fb44ed5fa6defb598aa8beb909736580d0c957c69
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.936210504Z" level=info msg="shim reaped" id=df99028cb7908df1e06b82d2203c059159149adf6e669828d9666c518717004a
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.951398754Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.951542757Z" level=info msg="shim reaped" id=36cadd8dc8d4f45022d4346e79389b61f6f74160a2558991a193ced7c86d70a6
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.956025931Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.951965664Z" level=warning msg="c4e50e6381dba0a35b43056fb44ed5fa6defb598aa8beb909736580d0c957c69 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c4e50e6381dba0a35b43056fb44ed5fa6defb598aa8beb909736580d0c957c69/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.962363835Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.962588939Z" level=warning msg="36cadd8dc8d4f45022d4346e79389b61f6f74160a2558991a193ced7c86d70a6 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/36cadd8dc8d4f45022d4346e79389b61f6f74160a2558991a193ced7c86d70a6/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:32 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:32.000048856Z" level=info msg="shim reaped" id=ba17c735408707bd4a507b8f5b47bf40d2d706f46012ba85cee75ad517b2e1f0
	Dec 18 13:32:32 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:32.011718345Z" level=info msg="shim reaped" id=7f2d09710da7b3e1aa113e5504828ad48bde6408ca3035fe471637c1591b5337
	Dec 18 13:32:32 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:32.011943649Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:32 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:32.012461457Z" level=warning msg="ba17c735408707bd4a507b8f5b47bf40d2d706f46012ba85cee75ad517b2e1f0 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/ba17c735408707bd4a507b8f5b47bf40d2d706f46012ba85cee75ad517b2e1f0/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:32 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:32.021818009Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:32 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:32.271097249Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/446db23444220c3910aab284164d34381a3715b08132b8504dd0bfdb35a8044f/shim.sock" debug=false pid=8366
	Dec 18 13:32:32 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:32.579376046Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5963ead389b4d455fa0da38ca6b2bee69ff106a129122a828c0c8d06a0e26c8d/shim.sock" debug=false pid=8417
	Dec 18 13:32:35 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:35.763037295Z" level=info msg="shim reaped" id=f9dac79e612d8efe14542de693f50ed33f0a7d649c1fb13c72addd79bc7e2e0b
	Dec 18 13:32:35 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:35.773479356Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:35 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:35.773652959Z" level=warning msg="f9dac79e612d8efe14542de693f50ed33f0a7d649c1fb13c72addd79bc7e2e0b cleanup: failed to unmount IPC: umount /var/lib/docker/containers/f9dac79e612d8efe14542de693f50ed33f0a7d649c1fb13c72addd79bc7e2e0b/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:35 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:35.930297178Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1092fb355598e8d796d71ee4607e51752437cb0651a6d5d8f092d4cbcee10c0f/shim.sock" debug=false pid=8573
	Dec 18 13:32:35 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:35.933288524Z" level=info msg="shim reaped" id=e0e7e1b6d855394c321e544137d3f4f8f959ff2fe26442384b00916ab4ac7a5b
	Dec 18 13:32:35 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:35.941556452Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:35 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:35.941752055Z" level=warning msg="e0e7e1b6d855394c321e544137d3f4f8f959ff2fe26442384b00916ab4ac7a5b cleanup: failed to unmount IPC: umount /var/lib/docker/containers/e0e7e1b6d855394c321e544137d3f4f8f959ff2fe26442384b00916ab4ac7a5b/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:36 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:36.306868718Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/aeaca1aa5841433fbbdebe49705b87040e0c85c580d17d4332b17bcda37c480d/shim.sock" debug=false pid=8634
	Dec 18 13:32:37 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:37.362493371Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ed88295551026a24d769b36e26acbc24666cf41a4f29ce13fd652f587638bbbe/shim.sock" debug=false pid=8715
	Dec 18 13:32:37 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:37.628788953Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/87023a324b314810d43e76f256bb2a0ef01af55d3c12cc954730f18fb2114373/shim.sock" debug=false pid=8757
	Dec 18 13:32:40 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:40.675342917Z" level=info msg="Container 642295739e5d48eefc84904ce67980442aee0dee10137ca3bd4a0d646d78bab5 failed to exit within 10 seconds of signal 15 - using the force"
	Dec 18 13:32:40 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:40.783589160Z" level=info msg="Container f0c791e871065c7f10b7f085c35da54d1d6da1e4e02fa3277e859ae7f9785801 failed to exit within 10 seconds of signal 15 - using the force"
	Dec 18 13:32:40 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:40.838484342Z" level=info msg="shim reaped" id=642295739e5d48eefc84904ce67980442aee0dee10137ca3bd4a0d646d78bab5
	Dec 18 13:32:40 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:40.848723287Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:40 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:40.848902890Z" level=warning msg="642295739e5d48eefc84904ce67980442aee0dee10137ca3bd4a0d646d78bab5 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/642295739e5d48eefc84904ce67980442aee0dee10137ca3bd4a0d646d78bab5/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:40 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:40.949127118Z" level=info msg="shim reaped" id=f0c791e871065c7f10b7f085c35da54d1d6da1e4e02fa3277e859ae7f9785801
	Dec 18 13:32:40 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:40.960091474Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:40 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:40.960329677Z" level=warning msg="f0c791e871065c7f10b7f085c35da54d1d6da1e4e02fa3277e859ae7f9785801 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/f0c791e871065c7f10b7f085c35da54d1d6da1e4e02fa3277e859ae7f9785801/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:41 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:41.065626463Z" level=info msg="Daemon shutdown complete"
	Dec 18 13:32:41 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:41.065878566Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Dec 18 13:32:41 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:41.066572676Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 18 13:32:41 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:41.066780179Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 18 13:32:41 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:41.091737529Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 18 13:32:41 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:41.092194535Z" level=warning msg="Failed detaching sandbox a9132aa5b22ac90401eb516901a7eb6a3d9fb1daff42f99397aff71ecb4809f7 from endpoint 832d8f03e2355a50d9f20788957ed5a3945a460b7a4933594bbf2f4c332c5bda: failed to update store for object type *libnetwork.endpoint: open : no such file or directory\n"
	Dec 18 13:32:41 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:41.092488739Z" level=warning msg="Failed deleting endpoint 832d8f03e2355a50d9f20788957ed5a3945a460b7a4933594bbf2f4c332c5bda: endpoint with name k8s_POD_kube-addon-manager-minikube_kube-system_c3e29047da86ce6690916750ab69c40b_1 id 832d8f03e2355a50d9f20788957ed5a3945a460b7a4933594bbf2f4c332c5bda has active containers\n"
	Dec 18 13:32:41 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:41.092638741Z" level=warning msg="Failed to delete sandbox a9132aa5b22ac90401eb516901a7eb6a3d9fb1daff42f99397aff71ecb4809f7 from store: open : no such file or directory"
	Dec 18 13:32:41 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:41.101300863Z" level=error msg="0b29ce09353d4bb2cc9098b1194b461266068720a85649ff7f8078a3594965bf cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 18 13:32:41 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:41.101485865Z" level=error msg="Handler for POST /containers/0b29ce09353d4bb2cc9098b1194b461266068720a85649ff7f8078a3594965bf/start returned error: transport is closing: unavailable"
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: docker.service: Succeeded.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: Stopped Docker Application Container Engine.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 8366 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 8417 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 8573 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 8634 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 8715 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 8757 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.128903539Z" level=info msg="Starting up"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.131480575Z" level=info msg="libcontainerd: started new containerd process" pid=8903
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.131619076Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.131696277Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.131777679Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.131975981Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.167954178Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.168625987Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.169466898Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.169833704Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.169947805Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.171899632Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.172054834Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.172865645Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.173404253Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.173775158Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.173879259Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.173911960Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.173924460Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.173931160Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.174067262Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.174171163Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.174436667Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.174462067Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.174474168Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.174486068Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.174497568Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.174509068Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.174526668Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.174557169Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.195700860Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.195872863Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.196466271Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.197802189Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.197889390Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.197920191Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.197931791Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.197942291Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.197952791Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.197963992Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.197975092Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.197985992Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.198008192Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.198039693Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.198053293Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.198081293Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.198092493Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.198389297Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.198466498Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.198479199Z" level=info msg="containerd successfully booted in 0.032055s"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.210763868Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.210902270Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.210930570Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.211049272Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.212536593Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.212582393Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.212610794Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.212622994Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.216677450Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.310041237Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.310201440Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.310383642Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.310399042Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.310406642Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.310413943Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.310681546Z" level=info msg="Loading containers: start."
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.014705854Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.038339175Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=446db23444220c3910aab284164d34381a3715b08132b8504dd0bfdb35a8044f path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/446db23444220c3910aab284164d34381a3715b08132b8504dd0bfdb35a8044f"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.038666879Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.082653876Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.111944873Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.118661265Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=ed88295551026a24d769b36e26acbc24666cf41a4f29ce13fd652f587638bbbe path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ed88295551026a24d769b36e26acbc24666cf41a4f29ce13fd652f587638bbbe"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.120151585Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.120437689Z" level=warning msg="87023a324b314810d43e76f256bb2a0ef01af55d3c12cc954730f18fb2114373 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/87023a324b314810d43e76f256bb2a0ef01af55d3c12cc954730f18fb2114373/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.128884203Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.137021014Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=1092fb355598e8d796d71ee4607e51752437cb0651a6d5d8f092d4cbcee10c0f path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/1092fb355598e8d796d71ee4607e51752437cb0651a6d5d8f092d4cbcee10c0f"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.137363118Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.141516475Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=87023a324b314810d43e76f256bb2a0ef01af55d3c12cc954730f18fb2114373 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/87023a324b314810d43e76f256bb2a0ef01af55d3c12cc954730f18fb2114373"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.148650572Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=5963ead389b4d455fa0da38ca6b2bee69ff106a129122a828c0c8d06a0e26c8d path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/5963ead389b4d455fa0da38ca6b2bee69ff106a129122a828c0c8d06a0e26c8d"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.153149533Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.153453937Z" level=warning msg="5963ead389b4d455fa0da38ca6b2bee69ff106a129122a828c0c8d06a0e26c8d cleanup: failed to unmount IPC: umount /var/lib/docker/containers/5963ead389b4d455fa0da38ca6b2bee69ff106a129122a828c0c8d06a0e26c8d/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.153783841Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.154008644Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.314430522Z" level=info msg="Removing stale sandbox da5ad3c442e68411756c1ea047f8bbcc6c3ed15724f54ee8f3b60b167095a6ae (ed88295551026a24d769b36e26acbc24666cf41a4f29ce13fd652f587638bbbe)"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.317179559Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 2fce935fa8679bda61c5e579ea0de74a1b56d9814471f14afbd4093291c6a03a 6ea412a76216b1a4db792f06879b6a7bcdc1a46524f2928b8c2bbbf7fdc177ad], retrying...."
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.433167333Z" level=info msg="Removing stale sandbox 29cfa647329541ed42b6ff42e242cadec3a9e7886ee1d95e00e9af297cd8dc7d (446db23444220c3910aab284164d34381a3715b08132b8504dd0bfdb35a8044f)"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.436852083Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 2fce935fa8679bda61c5e579ea0de74a1b56d9814471f14afbd4093291c6a03a 21f2d40c22387c7ed006b2bd1639397d179d7e2c5fe4632f6fd3e4d1b00301d2], retrying...."
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.577304189Z" level=info msg="Removing stale sandbox a16a4457c282ffee52703ba3ec3a8f1c469c71721e774031e3bfa05e1866e91d (1092fb355598e8d796d71ee4607e51752437cb0651a6d5d8f092d4cbcee10c0f)"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.585443500Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint eb8696ff1996dad352a2c5bd02b2abb6b69b84bcb8508f4ba2d107072a49bd02 92dc7d12436e8bee86acefd3d19f6918288e14e185d266c003482520b2b395d8], retrying...."
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.735742139Z" level=info msg="Removing stale sandbox a9132aa5b22ac90401eb516901a7eb6a3d9fb1daff42f99397aff71ecb4809f7 (0b29ce09353d4bb2cc9098b1194b461266068720a85649ff7f8078a3594965bf)"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.739182286Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 2fce935fa8679bda61c5e579ea0de74a1b56d9814471f14afbd4093291c6a03a 832d8f03e2355a50d9f20788957ed5a3945a460b7a4933594bbf2f4c332c5bda], retrying...."
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.753438880Z" level=info msg="There are old running containers, the network config will not take affect"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.799594306Z" level=info msg="Loading containers: done."
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.840900967Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.841878280Z" level=info msg="Daemon has completed initialization"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.875211132Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 18 13:32:43 running-upgrade-208100 systemd[1]: Started Docker Application Container Engine.
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.876014843Z" level=info msg="API listen on [::]:2376"
	Dec 18 13:32:44 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:44.349703296Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d1d35f63acd71a52bd593d048b466df4b6b91bc0e6b80b9e5f145b14bcf2ae9f/shim.sock" debug=false pid=9380
	Dec 18 13:32:44 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:44.510443242Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bfc76e6407688fc3afad0d2c724767c58c0cfbf7217737274d348171e982cb2f/shim.sock" debug=false pid=9417
	Dec 18 13:32:44 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:44.510968349Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d759568fabc5cb3a95f93a6bd9dc398f0ec5d208c9f12c78ff4e0d995bc33eca/shim.sock" debug=false pid=9416
	Dec 18 13:32:44 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:44.526792661Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/633ff02596d63b2fd3674ae50988bdf2e4dd905b97d2284d60a386fcd3529048/shim.sock" debug=false pid=9429
	Dec 18 13:32:44 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:44.539870835Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b2645b9471495d6d1e69069be948dfd23c4c4deadd0a00e99ec0f8dd7e85bd6d/shim.sock" debug=false pid=9431
	Dec 18 13:32:44 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:44.571987364Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4669b3e39ea926df6aa01c1a5a99dcc31a8e8ea8411b602cc07145eee7f75ed4/shim.sock" debug=false pid=9466
	Dec 18 13:32:44 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:44.599366630Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d74b3c1e87b87325f3c0762d59d69f9ae2d6ca9c2e64f17f8b32d515131376ef/shim.sock" debug=false pid=9482
	Dec 18 13:32:45 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:45.233816552Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/102a30f338430c2229b19c989ad109579135b516dbcd2b7d10d1dc09962c46f0/shim.sock" debug=false pid=9671
	Dec 18 13:32:45 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:45.519756409Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e4e74c211a5107319ce015b1a1dcf443b01f864a74ba66d4d9d5f3266e444cd7/shim.sock" debug=false pid=9742
	Dec 18 13:32:45 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:45.541833499Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a925f2b64cfee4612fc19cbc0f6e1a378fc7e2862658871d6cef23b09a7a3c31/shim.sock" debug=false pid=9752
	Dec 18 13:32:45 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:45.565171006Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8bd5a5f38ae7e08bdbb486c2797fa232c0a3691ebc094dc16d2d430e772371d7/shim.sock" debug=false pid=9753
	Dec 18 13:32:45 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:45.681742738Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b17f31d477d332c6c394226071a9edae5e46792d0cfe59f6191012c4e6b60b4d/shim.sock" debug=false pid=9786
	Dec 18 13:32:45 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:45.774952963Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8737f7395de1fe81a5d873ed4ec7a3be3be269ff3eaa2970751eb81bb92637c7/shim.sock" debug=false pid=9811
	Dec 18 13:32:46 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:46.885808772Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7bba44b457fa3d7b0de1b4f7e37c90e5b2638d21487c5d2a33195996898b685e/shim.sock" debug=false pid=10026
	Dec 18 13:32:47 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:47.890792180Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:47 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:47.891207985Z" level=warning msg="aeaca1aa5841433fbbdebe49705b87040e0c85c580d17d4332b17bcda37c480d cleanup: failed to unmount IPC: umount /var/lib/docker/containers/aeaca1aa5841433fbbdebe49705b87040e0c85c580d17d4332b17bcda37c480d/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:47 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:47.909201814Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=aeaca1aa5841433fbbdebe49705b87040e0c85c580d17d4332b17bcda37c480d path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/aeaca1aa5841433fbbdebe49705b87040e0c85c580d17d4332b17bcda37c480d"
	Dec 18 13:32:47 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:47.910999437Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:48 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:48.032847180Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8e2de2e60cc14834c1ffe974a39702d314264b5315d426fa407b7a930f25de69/shim.sock" debug=false pid=10124
	Dec 18 13:32:48 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:48.482970114Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fd187324c33b84f5eedd0d1f50dbf76ef140885044f7cbec8f9ea3557f1ab583/shim.sock" debug=false pid=10184
	Dec 18 13:32:55 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:55.239032934Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ca99e97d3d2921f29b622bba973b11ef162ad88411b7aa8371e533f0c51d8130/shim.sock" debug=false pid=10344
	Dec 18 13:32:59 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:59.366809818Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/073e0237abecca1ebe8c9cb4bae36c94ce12867d4b93b98d6254cb9a5a469ac1/shim.sock" debug=false pid=10430
	Dec 18 13:33:13 running-upgrade-208100 systemd[1]: Stopping Docker Application Container Engine...
	Dec 18 13:33:13 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:13.443064756Z" level=info msg="Processing signal 'terminated'"
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.866719931Z" level=info msg="shim reaped" id=d759568fabc5cb3a95f93a6bd9dc398f0ec5d208c9f12c78ff4e0d995bc33eca
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.877607427Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.884461324Z" level=info msg="shim reaped" id=bfc76e6407688fc3afad0d2c724767c58c0cfbf7217737274d348171e982cb2f
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.893561821Z" level=info msg="shim reaped" id=d74b3c1e87b87325f3c0762d59d69f9ae2d6ca9c2e64f17f8b32d515131376ef
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.902616918Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.904138117Z" level=info msg="shim reaped" id=633ff02596d63b2fd3674ae50988bdf2e4dd905b97d2284d60a386fcd3529048
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.923131110Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.923384110Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.926036409Z" level=info msg="shim reaped" id=8bd5a5f38ae7e08bdbb486c2797fa232c0a3691ebc094dc16d2d430e772371d7
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.934173006Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.934954106Z" level=warning msg="8bd5a5f38ae7e08bdbb486c2797fa232c0a3691ebc094dc16d2d430e772371d7 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/8bd5a5f38ae7e08bdbb486c2797fa232c0a3691ebc094dc16d2d430e772371d7/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.988498086Z" level=info msg="shim reaped" id=ca99e97d3d2921f29b622bba973b11ef162ad88411b7aa8371e533f0c51d8130
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.009629178Z" level=info msg="shim reaped" id=073e0237abecca1ebe8c9cb4bae36c94ce12867d4b93b98d6254cb9a5a469ac1
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.009816178Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.010115978Z" level=warning msg="ca99e97d3d2921f29b622bba973b11ef162ad88411b7aa8371e533f0c51d8130 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/ca99e97d3d2921f29b622bba973b11ef162ad88411b7aa8371e533f0c51d8130/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.020749774Z" level=info msg="shim reaped" id=8e2de2e60cc14834c1ffe974a39702d314264b5315d426fa407b7a930f25de69
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.020990374Z" level=info msg="shim reaped" id=b2645b9471495d6d1e69069be948dfd23c4c4deadd0a00e99ec0f8dd7e85bd6d
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.021341974Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.021710574Z" level=warning msg="073e0237abecca1ebe8c9cb4bae36c94ce12867d4b93b98d6254cb9a5a469ac1 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/073e0237abecca1ebe8c9cb4bae36c94ce12867d4b93b98d6254cb9a5a469ac1/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.031049770Z" level=info msg="shim reaped" id=e4e74c211a5107319ce015b1a1dcf443b01f864a74ba66d4d9d5f3266e444cd7
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.033216670Z" level=info msg="shim reaped" id=a925f2b64cfee4612fc19cbc0f6e1a378fc7e2862658871d6cef23b09a7a3c31
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.036690168Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.036737768Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.040032867Z" level=info msg="shim reaped" id=4669b3e39ea926df6aa01c1a5a99dcc31a8e8ea8411b602cc07145eee7f75ed4
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.041424267Z" level=info msg="shim reaped" id=d1d35f63acd71a52bd593d048b466df4b6b91bc0e6b80b9e5f145b14bcf2ae9f
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.042140766Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.043040466Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.044517065Z" level=warning msg="a925f2b64cfee4612fc19cbc0f6e1a378fc7e2862658871d6cef23b09a7a3c31 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/a925f2b64cfee4612fc19cbc0f6e1a378fc7e2862658871d6cef23b09a7a3c31/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.046333165Z" level=warning msg="e4e74c211a5107319ce015b1a1dcf443b01f864a74ba66d4d9d5f3266e444cd7 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/e4e74c211a5107319ce015b1a1dcf443b01f864a74ba66d4d9d5f3266e444cd7/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.058508060Z" level=info msg="shim reaped" id=7bba44b457fa3d7b0de1b4f7e37c90e5b2638d21487c5d2a33195996898b685e
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.059742260Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.065407558Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.065910458Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.852306368Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a6fba5786ff788b9dc73bdb9810a96ce9278d87dec794da1ff55462711b2e979/shim.sock" debug=false pid=11444
	Dec 18 13:33:16 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:16.977205053Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ae36ca9692232e6bed49d3d2678d6128fb5090f1ba6d4ade00e32bcde5414637/shim.sock" debug=false pid=11489
	Dec 18 13:33:19 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:19.022810598Z" level=info msg="shim reaped" id=fd187324c33b84f5eedd0d1f50dbf76ef140885044f7cbec8f9ea3557f1ab583
	Dec 18 13:33:19 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:19.032839195Z" level=warning msg="fd187324c33b84f5eedd0d1f50dbf76ef140885044f7cbec8f9ea3557f1ab583 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/fd187324c33b84f5eedd0d1f50dbf76ef140885044f7cbec8f9ea3557f1ab583/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:33:19 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:19.033426094Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:19 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:19.043584191Z" level=info msg="shim reaped" id=8737f7395de1fe81a5d873ed4ec7a3be3be269ff3eaa2970751eb81bb92637c7
	Dec 18 13:33:19 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:19.052117387Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:19 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:19.052491287Z" level=warning msg="8737f7395de1fe81a5d873ed4ec7a3be3be269ff3eaa2970751eb81bb92637c7 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/8737f7395de1fe81a5d873ed4ec7a3be3be269ff3eaa2970751eb81bb92637c7/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:33:19 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:19.181671840Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6c10ea079e9a1716fd4db0cec5465c43fb0cdf9cf1c493c61c1e199192233fc2/shim.sock" debug=false pid=11598
	Dec 18 13:33:19 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:19.579642693Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4dcc6de242f1ac3ca36ff633ed825f925894ec608e48e9ff5989802973f3d91c/shim.sock" debug=false pid=11661
	Dec 18 13:33:22 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:22.727641232Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fb51af98cd9b6de29d178a9c8da2620cc7bdf852cebeff8d80041280c039eea5/shim.sock" debug=false pid=11755
	Dec 18 13:33:23 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:23.188947062Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/087b2d8be812db0e126c8ffdf135a12a904c94e88e3d0cc22b004de7a1ba4f3f/shim.sock" debug=false pid=11820
	Dec 18 13:33:23 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:23.823598128Z" level=info msg="Container 102a30f338430c2229b19c989ad109579135b516dbcd2b7d10d1dc09962c46f0 failed to exit within 10 seconds of signal 15 - using the force"
	Dec 18 13:33:23 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:23.906010597Z" level=info msg="Container b17f31d477d332c6c394226071a9edae5e46792d0cfe59f6191012c4e6b60b4d failed to exit within 10 seconds of signal 15 - using the force"
	Dec 18 13:33:23 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:23.972069373Z" level=info msg="shim reaped" id=102a30f338430c2229b19c989ad109579135b516dbcd2b7d10d1dc09962c46f0
	Dec 18 13:33:23 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:23.980801970Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:23 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:23.981036470Z" level=warning msg="102a30f338430c2229b19c989ad109579135b516dbcd2b7d10d1dc09962c46f0 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/102a30f338430c2229b19c989ad109579135b516dbcd2b7d10d1dc09962c46f0/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.076674134Z" level=info msg="shim reaped" id=b17f31d477d332c6c394226071a9edae5e46792d0cfe59f6191012c4e6b60b4d
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.087425430Z" level=warning msg="b17f31d477d332c6c394226071a9edae5e46792d0cfe59f6191012c4e6b60b4d cleanup: failed to unmount IPC: umount /var/lib/docker/containers/b17f31d477d332c6c394226071a9edae5e46792d0cfe59f6191012c4e6b60b4d/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.087466030Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.153496106Z" level=info msg="Daemon shutdown complete"
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.153553506Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.153814206Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.156690605Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.217665282Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.227305179Z" level=warning msg="eba7c202d8f4db2001fe02512de987a29d2fd1ce3ef67ba7cce1f0192b50ee16 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/eba7c202d8f4db2001fe02512de987a29d2fd1ce3ef67ba7cce1f0192b50ee16/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.241495674Z" level=error msg="eba7c202d8f4db2001fe02512de987a29d2fd1ce3ef67ba7cce1f0192b50ee16 cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.241781073Z" level=error msg="Handler for POST /containers/eba7c202d8f4db2001fe02512de987a29d2fd1ce3ef67ba7cce1f0192b50ee16/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.529152967Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.536142565Z" level=warning msg="d48eca03f6c616ae5ef3ca13fb7d198addcd4c7dd9c687dfa664291e4f6a45a4 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d48eca03f6c616ae5ef3ca13fb7d198addcd4c7dd9c687dfa664291e4f6a45a4/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.543551562Z" level=error msg="d48eca03f6c616ae5ef3ca13fb7d198addcd4c7dd9c687dfa664291e4f6a45a4 cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.543688362Z" level=error msg="Handler for POST /containers/d48eca03f6c616ae5ef3ca13fb7d198addcd4c7dd9c687dfa664291e4f6a45a4/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: docker.service: Succeeded.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: Stopped Docker Application Container Engine.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 11444 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 11489 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 11598 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 11661 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 11755 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 11820 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 13:33:25 running-upgrade-208100 dockerd[11947]: time="2023-12-18T13:33:25.229411409Z" level=info msg="Starting up"
	Dec 18 13:33:25 running-upgrade-208100 dockerd[11947]: time="2023-12-18T13:33:25.232850708Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 18 13:33:25 running-upgrade-208100 dockerd[11947]: time="2023-12-18T13:33:25.232982008Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 18 13:33:25 running-upgrade-208100 dockerd[11947]: time="2023-12-18T13:33:25.233013208Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 18 13:33:25 running-upgrade-208100 dockerd[11947]: time="2023-12-18T13:33:25.233033308Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 18 13:33:25 running-upgrade-208100 dockerd[11947]: time="2023-12-18T13:33:25.233507108Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Dec 18 13:33:25 running-upgrade-208100 dockerd[11947]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused": unavailable
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
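	Reading the excerpt above, the restart failure has a visible chain: the original dockerd (pid 8895) ran containerd as its own child and served it on /var/run/docker/containerd/containerd.sock, systemd then stopped docker.service (warning about several left-over containerd-shim processes), and the replacement dockerd (pid 11947) instead dialed /run/containerd/containerd.sock and exited on "connection refused". A minimal way to confirm that socket mismatch from inside the guest might look like the sketch below; it is only a sketch — shell access to the VM and the presence of a ctr binary are assumptions, not things this report shows.
	
	  # (assumes shell access to the node, e.g. via `minikube ssh -p running-upgrade-208100`)
	  # Is a standalone containerd unit supposed to own /run/containerd/containerd.sock?
	  systemctl status containerd --no-pager
	  # Which of the two candidate sockets actually exists?
	  ls -l /run/containerd/containerd.sock /var/run/docker/containerd/containerd.sock
	  # If the system socket exists, check that containerd answers on it (ctr ships with containerd).
	  sudo ctr --address /run/containerd/containerd.sock version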
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
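	The report captures the docker unit's journal immediately below; the other checks that the failure message and the earlier systemd warnings point to might look like this on the guest (again a sketch only — none of these commands appear in the report, and access via minikube ssh is assumed):
	
	  # Unit state and last failure, per the error text above.
	  systemctl status docker.service --no-pager -l
	  # systemd reported left-over containerd-shim processes before the restart; list survivors.
	  ps -eo pid,ppid,cmd | grep [c]ontainerd-shim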
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Logs begin at Mon 2023-12-18 13:26:54 UTC, end at Mon 2023-12-18 13:33:25 UTC. --
	Dec 18 13:28:31 running-upgrade-208100 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.243652308Z" level=info msg="Starting up"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.247681693Z" level=info msg="libcontainerd: started new containerd process" pid=2743
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.247748493Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.247761793Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.247784393Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.247819993Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.317367438Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.317837736Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.318028135Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.318385134Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.318493634Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.321169024Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.321272323Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.321938421Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.322329919Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.322743218Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.322852218Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.322979517Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.322991817Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.323000417Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.336140569Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.336369668Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.336435068Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.336465868Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.336495767Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.336519767Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.336543267Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.336566967Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.336589467Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.336612467Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.336917866Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.337152565Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338092762Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338375561Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338447060Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338482660Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338505460Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338536560Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338558560Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338579660Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338599460Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338623560Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338644360Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338742759Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338780659Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338805259Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.338835359Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.339009958Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.339154758Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.339171258Z" level=info msg="containerd successfully booted in 0.025245s"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.358025588Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.358083388Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.358170588Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.358524187Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.362117973Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.362727171Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.362908471Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.363006470Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.413882584Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.414169782Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.414396882Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.414421882Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.414436181Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.414449581Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.414922280Z" level=info msg="Loading containers: start."
	Dec 18 13:28:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:31.613467151Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 18 13:28:32 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:32.254301500Z" level=info msg="Loading containers: done."
	Dec 18 13:28:32 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:32.337560294Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Dec 18 13:28:32 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:32.337957593Z" level=info msg="Daemon has completed initialization"
	Dec 18 13:28:33 running-upgrade-208100 systemd[1]: Started Docker Application Container Engine.
	Dec 18 13:28:33 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:33.325035771Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 18 13:28:33 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:28:33.325170870Z" level=info msg="API listen on [::]:2376"
	Dec 18 13:29:51 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:29:51.508545072Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7f2d09710da7b3e1aa113e5504828ad48bde6408ca3035fe471637c1591b5337/shim.sock" debug=false pid=4314
	Dec 18 13:29:51 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:29:51.522258609Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b4aadbe414c87bc0a6a9c5d54898e8e6586dd660370d81c0633f9b510866451a/shim.sock" debug=false pid=4320
	Dec 18 13:29:51 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:29:51.538945331Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/12d55b9711dcac3719f9e3d3aec999ef177231c0d9c56a782ad7fd67bb3af861/shim.sock" debug=false pid=4332
	Dec 18 13:29:51 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:29:51.802458008Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/71a621a4e447cb405091a56ff70084b8874733de6dc714d8b4339a19294b6b7c/shim.sock" debug=false pid=4401
	Dec 18 13:29:51 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:29:51.809409076Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1a15a34c3ba1827ceb2dab10b4105531d41b9b40b5f4bc9f6c7084e4593b0240/shim.sock" debug=false pid=4412
	Dec 18 13:29:52 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:29:52.234059918Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2c43ae2eaaa18fe33132f0344d49af93d5b2eee42c2b1b94a8b04dfc78bdce2b/shim.sock" debug=false pid=4565
	Dec 18 13:29:52 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:29:52.254597224Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/edee80fb1c6db38f1d12cab3ebd158a2ef78f0f240c9243587ca70ffda5a4ce4/shim.sock" debug=false pid=4570
	Dec 18 13:29:52 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:29:52.379828450Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/642295739e5d48eefc84904ce67980442aee0dee10137ca3bd4a0d646d78bab5/shim.sock" debug=false pid=4619
	Dec 18 13:29:52 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:29:52.442269364Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f0c791e871065c7f10b7f085c35da54d1d6da1e4e02fa3277e859ae7f9785801/shim.sock" debug=false pid=4644
	Dec 18 13:29:52 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:29:52.505307576Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ba17c735408707bd4a507b8f5b47bf40d2d706f46012ba85cee75ad517b2e1f0/shim.sock" debug=false pid=4674
	Dec 18 13:30:11 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:30:11.327884031Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8a27b9421df2c52e3d655eced7521464b06efd553e58b900e41aca8f7e9de2db/shim.sock" debug=false pid=5525
	Dec 18 13:30:11 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:30:11.955412317Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c4e50e6381dba0a35b43056fb44ed5fa6defb598aa8beb909736580d0c957c69/shim.sock" debug=false pid=5591
	Dec 18 13:30:17 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:30:17.885468082Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3449aaaf3d12a19f38ccee2051a8eadda2a8c48f8221dd22586e2a799e710ec9/shim.sock" debug=false pid=5773
	Dec 18 13:30:19 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:30:19.284626053Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/df99028cb7908df1e06b82d2203c059159149adf6e669828d9666c518717004a/shim.sock" debug=false pid=5849
	Dec 18 13:30:21 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:30:21.333625605Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e0e7e1b6d855394c321e544137d3f4f8f959ff2fe26442384b00916ab4ac7a5b/shim.sock" debug=false pid=5912
	Dec 18 13:30:21 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:30:21.985129007Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4e68fa7a9d9653e6c4dc943a9ee72818ba9a3b035af33fb0e7029c9a346da9a4/shim.sock" debug=false pid=5968
	Dec 18 13:30:22 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:30:22.144048217Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/36cadd8dc8d4f45022d4346e79389b61f6f74160a2558991a193ced7c86d70a6/shim.sock" debug=false pid=6005
	Dec 18 13:30:24 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:30:24.235667650Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f9dac79e612d8efe14542de693f50ed33f0a7d649c1fb13c72addd79bc7e2e0b/shim.sock" debug=false pid=6102
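
Every "shim containerd-shim started" line above embeds the 64-hex container ID in its shim.sock path, and the later "shim reaped" lines carry the same ID, so container lifetimes can be reconstructed by pairing the two. A small Go sketch of that pairing, reading journal text like the above from stdin (the regexes assume only the message formats visible in these lines):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		started := regexp.MustCompile(`shim containerd-shim started.*moby/([0-9a-f]{64})/shim\.sock`)
		reaped := regexp.MustCompile(`shim reaped" id=([0-9a-f]{64})`)

		live := map[string]bool{}
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
		for sc.Scan() {
			line := sc.Text()
			if m := started.FindStringSubmatch(line); m != nil {
				live[m[1]] = true // shim came up for this container ID
			} else if m := reaped.FindStringSubmatch(line); m != nil {
				delete(live, m[1]) // shim exited and was reaped
			}
		}
		fmt.Printf("%d shims started but never reaped:\n", len(live))
		for id := range live {
			fmt.Println(" ", id[:12])
		}
	}

Feeding this section of the capture through it would list the shims that were started but not yet reaped by the point the log ends.
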
	Dec 18 13:32:30 running-upgrade-208100 systemd[1]: Stopping Docker Application Container Engine...
	Dec 18 13:32:30 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:30.352320042Z" level=info msg="Processing signal 'terminated'"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.245622829Z" level=info msg="shim reaped" id=12d55b9711dcac3719f9e3d3aec999ef177231c0d9c56a782ad7fd67bb3af861
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.255733696Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.646982940Z" level=info msg="shim reaped" id=4e68fa7a9d9653e6c4dc943a9ee72818ba9a3b035af33fb0e7029c9a346da9a4
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.655160375Z" level=info msg="shim reaped" id=1a15a34c3ba1827ceb2dab10b4105531d41b9b40b5f4bc9f6c7084e4593b0240
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.659100840Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.661490079Z" level=info msg="shim reaped" id=edee80fb1c6db38f1d12cab3ebd158a2ef78f0f240c9243587ca70ffda5a4ce4
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.671326441Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.677080436Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.677629245Z" level=warning msg="edee80fb1c6db38f1d12cab3ebd158a2ef78f0f240c9243587ca70ffda5a4ce4 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/edee80fb1c6db38f1d12cab3ebd158a2ef78f0f240c9243587ca70ffda5a4ce4/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.729409098Z" level=info msg="shim reaped" id=2c43ae2eaaa18fe33132f0344d49af93d5b2eee42c2b1b94a8b04dfc78bdce2b
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.747146790Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.748034905Z" level=warning msg="2c43ae2eaaa18fe33132f0344d49af93d5b2eee42c2b1b94a8b04dfc78bdce2b cleanup: failed to unmount IPC: umount /var/lib/docker/containers/2c43ae2eaaa18fe33132f0344d49af93d5b2eee42c2b1b94a8b04dfc78bdce2b/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.794562671Z" level=info msg="shim reaped" id=b4aadbe414c87bc0a6a9c5d54898e8e6586dd660370d81c0633f9b510866451a
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.795321484Z" level=info msg="shim reaped" id=3449aaaf3d12a19f38ccee2051a8eadda2a8c48f8221dd22586e2a799e710ec9
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.804009527Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.810510434Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.854805663Z" level=info msg="shim reaped" id=8a27b9421df2c52e3d655eced7521464b06efd553e58b900e41aca8f7e9de2db
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.866347353Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.892724688Z" level=info msg="shim reaped" id=71a621a4e447cb405091a56ff70084b8874733de6dc714d8b4339a19294b6b7c
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.903063958Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.935048185Z" level=info msg="shim reaped" id=c4e50e6381dba0a35b43056fb44ed5fa6defb598aa8beb909736580d0c957c69
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.936210504Z" level=info msg="shim reaped" id=df99028cb7908df1e06b82d2203c059159149adf6e669828d9666c518717004a
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.951398754Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.951542757Z" level=info msg="shim reaped" id=36cadd8dc8d4f45022d4346e79389b61f6f74160a2558991a193ced7c86d70a6
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.956025931Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.951965664Z" level=warning msg="c4e50e6381dba0a35b43056fb44ed5fa6defb598aa8beb909736580d0c957c69 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c4e50e6381dba0a35b43056fb44ed5fa6defb598aa8beb909736580d0c957c69/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.962363835Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:31 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:31.962588939Z" level=warning msg="36cadd8dc8d4f45022d4346e79389b61f6f74160a2558991a193ced7c86d70a6 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/36cadd8dc8d4f45022d4346e79389b61f6f74160a2558991a193ced7c86d70a6/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:32 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:32.000048856Z" level=info msg="shim reaped" id=ba17c735408707bd4a507b8f5b47bf40d2d706f46012ba85cee75ad517b2e1f0
	Dec 18 13:32:32 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:32.011718345Z" level=info msg="shim reaped" id=7f2d09710da7b3e1aa113e5504828ad48bde6408ca3035fe471637c1591b5337
	Dec 18 13:32:32 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:32.011943649Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:32 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:32.012461457Z" level=warning msg="ba17c735408707bd4a507b8f5b47bf40d2d706f46012ba85cee75ad517b2e1f0 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/ba17c735408707bd4a507b8f5b47bf40d2d706f46012ba85cee75ad517b2e1f0/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:32 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:32.021818009Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:32 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:32.271097249Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/446db23444220c3910aab284164d34381a3715b08132b8504dd0bfdb35a8044f/shim.sock" debug=false pid=8366
	Dec 18 13:32:32 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:32.579376046Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5963ead389b4d455fa0da38ca6b2bee69ff106a129122a828c0c8d06a0e26c8d/shim.sock" debug=false pid=8417
	Dec 18 13:32:35 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:35.763037295Z" level=info msg="shim reaped" id=f9dac79e612d8efe14542de693f50ed33f0a7d649c1fb13c72addd79bc7e2e0b
	Dec 18 13:32:35 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:35.773479356Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:35 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:35.773652959Z" level=warning msg="f9dac79e612d8efe14542de693f50ed33f0a7d649c1fb13c72addd79bc7e2e0b cleanup: failed to unmount IPC: umount /var/lib/docker/containers/f9dac79e612d8efe14542de693f50ed33f0a7d649c1fb13c72addd79bc7e2e0b/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:35 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:35.930297178Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1092fb355598e8d796d71ee4607e51752437cb0651a6d5d8f092d4cbcee10c0f/shim.sock" debug=false pid=8573
	Dec 18 13:32:35 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:35.933288524Z" level=info msg="shim reaped" id=e0e7e1b6d855394c321e544137d3f4f8f959ff2fe26442384b00916ab4ac7a5b
	Dec 18 13:32:35 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:35.941556452Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:35 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:35.941752055Z" level=warning msg="e0e7e1b6d855394c321e544137d3f4f8f959ff2fe26442384b00916ab4ac7a5b cleanup: failed to unmount IPC: umount /var/lib/docker/containers/e0e7e1b6d855394c321e544137d3f4f8f959ff2fe26442384b00916ab4ac7a5b/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:36 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:36.306868718Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/aeaca1aa5841433fbbdebe49705b87040e0c85c580d17d4332b17bcda37c480d/shim.sock" debug=false pid=8634
	Dec 18 13:32:37 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:37.362493371Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ed88295551026a24d769b36e26acbc24666cf41a4f29ce13fd652f587638bbbe/shim.sock" debug=false pid=8715
	Dec 18 13:32:37 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:37.628788953Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/87023a324b314810d43e76f256bb2a0ef01af55d3c12cc954730f18fb2114373/shim.sock" debug=false pid=8757
	Dec 18 13:32:40 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:40.675342917Z" level=info msg="Container 642295739e5d48eefc84904ce67980442aee0dee10137ca3bd4a0d646d78bab5 failed to exit within 10 seconds of signal 15 - using the force"
	Dec 18 13:32:40 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:40.783589160Z" level=info msg="Container f0c791e871065c7f10b7f085c35da54d1d6da1e4e02fa3277e859ae7f9785801 failed to exit within 10 seconds of signal 15 - using the force"
	Dec 18 13:32:40 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:40.838484342Z" level=info msg="shim reaped" id=642295739e5d48eefc84904ce67980442aee0dee10137ca3bd4a0d646d78bab5
	Dec 18 13:32:40 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:40.848723287Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:40 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:40.848902890Z" level=warning msg="642295739e5d48eefc84904ce67980442aee0dee10137ca3bd4a0d646d78bab5 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/642295739e5d48eefc84904ce67980442aee0dee10137ca3bd4a0d646d78bab5/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:40 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:40.949127118Z" level=info msg="shim reaped" id=f0c791e871065c7f10b7f085c35da54d1d6da1e4e02fa3277e859ae7f9785801
	Dec 18 13:32:40 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:40.960091474Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:40 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:40.960329677Z" level=warning msg="f0c791e871065c7f10b7f085c35da54d1d6da1e4e02fa3277e859ae7f9785801 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/f0c791e871065c7f10b7f085c35da54d1d6da1e4e02fa3277e859ae7f9785801/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:41 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:41.065626463Z" level=info msg="Daemon shutdown complete"
	Dec 18 13:32:41 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:41.065878566Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Dec 18 13:32:41 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:41.066572676Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 18 13:32:41 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:41.066780179Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 18 13:32:41 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:41.091737529Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 18 13:32:41 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:41.092194535Z" level=warning msg="Failed detaching sandbox a9132aa5b22ac90401eb516901a7eb6a3d9fb1daff42f99397aff71ecb4809f7 from endpoint 832d8f03e2355a50d9f20788957ed5a3945a460b7a4933594bbf2f4c332c5bda: failed to update store for object type *libnetwork.endpoint: open : no such file or directory\n"
	Dec 18 13:32:41 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:41.092488739Z" level=warning msg="Failed deleting endpoint 832d8f03e2355a50d9f20788957ed5a3945a460b7a4933594bbf2f4c332c5bda: endpoint with name k8s_POD_kube-addon-manager-minikube_kube-system_c3e29047da86ce6690916750ab69c40b_1 id 832d8f03e2355a50d9f20788957ed5a3945a460b7a4933594bbf2f4c332c5bda has active containers\n"
	Dec 18 13:32:41 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:41.092638741Z" level=warning msg="Failed to delete sandbox a9132aa5b22ac90401eb516901a7eb6a3d9fb1daff42f99397aff71ecb4809f7 from store: open : no such file or directory"
	Dec 18 13:32:41 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:41.101300863Z" level=error msg="0b29ce09353d4bb2cc9098b1194b461266068720a85649ff7f8078a3594965bf cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 18 13:32:41 running-upgrade-208100 dockerd[2735]: time="2023-12-18T13:32:41.101485865Z" level=error msg="Handler for POST /containers/0b29ce09353d4bb2cc9098b1194b461266068720a85649ff7f8078a3594965bf/start returned error: transport is closing: unavailable"
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: docker.service: Succeeded.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: Stopped Docker Application Container Engine.
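
The two "failed to exit within 10 seconds of signal 15 - using the force" lines above are Docker's normal stop escalation: SIGTERM first, a grace period (10 s here), then SIGKILL. A minimal sketch of the same term-then-kill pattern in Go (the pid and grace period are illustrative, not taken from this run):

	package main

	import (
		"fmt"
		"os"
		"syscall"
		"time"
	)

	// stopWithGrace sends SIGTERM, polls for exit, and escalates to SIGKILL
	// after the grace period, mirroring dockerd's container stop behaviour.
	func stopWithGrace(pid int, grace time.Duration) error {
		proc, err := os.FindProcess(pid)
		if err != nil {
			return err
		}
		if err := proc.Signal(syscall.SIGTERM); err != nil {
			return err
		}
		deadline := time.Now().Add(grace)
		for time.Now().Before(deadline) {
			// Signal 0 only checks that the process still exists.
			if err := proc.Signal(syscall.Signal(0)); err != nil {
				return nil // already gone
			}
			time.Sleep(100 * time.Millisecond)
		}
		fmt.Printf("pid %d failed to exit within %s of SIGTERM - using the force\n", pid, grace)
		return proc.Signal(syscall.SIGKILL)
	}

	func main() {
		// Hypothetical pid; on this guest it would be a real container process.
		if err := stopWithGrace(12345, 10*time.Second); err != nil {
			fmt.Println("stop:", err)
		}
	}
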
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 8366 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 8417 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 8573 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 8634 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 8715 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 8757 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
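
systemd finds pids 8366-8757 still inside docker.service's control group because each containerd-shim is deliberately detached from dockerd so its container can survive a daemon restart; the "Ignoring" warnings above are therefore expected here rather than a failure. A Linux-only Go sketch that lists surviving shims by scanning /proc (assumes the conventional /proc/<pid>/comm layout):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strconv"
		"strings"
	)

	func main() {
		entries, err := os.ReadDir("/proc")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		for _, e := range entries {
			pid, err := strconv.Atoi(e.Name())
			if err != nil {
				continue // not a process directory
			}
			comm, err := os.ReadFile(filepath.Join("/proc", e.Name(), "comm"))
			if err != nil {
				continue // process exited or permission denied
			}
			if strings.TrimSpace(string(comm)) == "containerd-shim" {
				fmt.Println("left-over shim pid:", pid)
			}
		}
	}
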
	Dec 18 13:32:42 running-upgrade-208100 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.128903539Z" level=info msg="Starting up"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.131480575Z" level=info msg="libcontainerd: started new containerd process" pid=8903
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.131619076Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.131696277Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.131777679Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.131975981Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.167954178Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.168625987Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.169466898Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.169833704Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.169947805Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.171899632Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.172054834Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.172865645Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.173404253Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.173775158Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.173879259Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.173911960Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.173924460Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.173931160Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.174067262Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.174171163Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.174436667Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.174462067Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.174474168Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.174486068Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.174497568Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.174509068Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.174526668Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.174557169Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.195700860Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.195872863Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.196466271Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.197802189Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.197889390Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.197920191Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.197931791Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.197942291Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.197952791Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.197963992Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.197975092Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.197985992Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.198008192Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.198039693Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.198053293Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.198081293Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.198092493Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.198389297Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.198466498Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.198479199Z" level=info msg="containerd successfully booted in 0.032055s"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.210763868Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.210902270Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.210930570Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.211049272Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.212536593Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.212582393Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.212610794Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.212622994Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.216677450Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.310041237Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.310201440Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.310383642Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.310399042Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.310406642Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.310413943Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
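
The blkio warnings indicate this guest kernel (4.19.81, per the modprobe error above) exposes none of the blkio cgroup control files, so dockerd disables I/O weight and throttle limits and continues otherwise unaffected. A quick Go check for the same control files (the /sys/fs/cgroup/blkio path assumes a cgroup v1 mount, as on this guest):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// The control files behind each warning dockerd printed above.
		files := []string{
			"blkio.weight",
			"blkio.weight_device",
			"blkio.throttle.read_bps_device",
			"blkio.throttle.write_bps_device",
			"blkio.throttle.read_iops_device",
			"blkio.throttle.write_iops_device",
		}
		for _, f := range files {
			_, err := os.Stat(filepath.Join("/sys/fs/cgroup/blkio", f))
			fmt.Printf("%-34s supported=%v\n", f, err == nil)
		}
	}
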
	Dec 18 13:32:42 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:42.310681546Z" level=info msg="Loading containers: start."
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.014705854Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.038339175Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=446db23444220c3910aab284164d34381a3715b08132b8504dd0bfdb35a8044f path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/446db23444220c3910aab284164d34381a3715b08132b8504dd0bfdb35a8044f"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.038666879Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.082653876Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.111944873Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.118661265Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=ed88295551026a24d769b36e26acbc24666cf41a4f29ce13fd652f587638bbbe path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ed88295551026a24d769b36e26acbc24666cf41a4f29ce13fd652f587638bbbe"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.120151585Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.120437689Z" level=warning msg="87023a324b314810d43e76f256bb2a0ef01af55d3c12cc954730f18fb2114373 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/87023a324b314810d43e76f256bb2a0ef01af55d3c12cc954730f18fb2114373/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.128884203Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.137021014Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=1092fb355598e8d796d71ee4607e51752437cb0651a6d5d8f092d4cbcee10c0f path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/1092fb355598e8d796d71ee4607e51752437cb0651a6d5d8f092d4cbcee10c0f"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.137363118Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.141516475Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=87023a324b314810d43e76f256bb2a0ef01af55d3c12cc954730f18fb2114373 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/87023a324b314810d43e76f256bb2a0ef01af55d3c12cc954730f18fb2114373"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.148650572Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=5963ead389b4d455fa0da38ca6b2bee69ff106a129122a828c0c8d06a0e26c8d path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/5963ead389b4d455fa0da38ca6b2bee69ff106a129122a828c0c8d06a0e26c8d"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.153149533Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.153453937Z" level=warning msg="5963ead389b4d455fa0da38ca6b2bee69ff106a129122a828c0c8d06a0e26c8d cleanup: failed to unmount IPC: umount /var/lib/docker/containers/5963ead389b4d455fa0da38ca6b2bee69ff106a129122a828c0c8d06a0e26c8d/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.153783841Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.154008644Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.314430522Z" level=info msg="Removing stale sandbox da5ad3c442e68411756c1ea047f8bbcc6c3ed15724f54ee8f3b60b167095a6ae (ed88295551026a24d769b36e26acbc24666cf41a4f29ce13fd652f587638bbbe)"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.317179559Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 2fce935fa8679bda61c5e579ea0de74a1b56d9814471f14afbd4093291c6a03a 6ea412a76216b1a4db792f06879b6a7bcdc1a46524f2928b8c2bbbf7fdc177ad], retrying...."
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.433167333Z" level=info msg="Removing stale sandbox 29cfa647329541ed42b6ff42e242cadec3a9e7886ee1d95e00e9af297cd8dc7d (446db23444220c3910aab284164d34381a3715b08132b8504dd0bfdb35a8044f)"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.436852083Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 2fce935fa8679bda61c5e579ea0de74a1b56d9814471f14afbd4093291c6a03a 21f2d40c22387c7ed006b2bd1639397d179d7e2c5fe4632f6fd3e4d1b00301d2], retrying...."
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.577304189Z" level=info msg="Removing stale sandbox a16a4457c282ffee52703ba3ec3a8f1c469c71721e774031e3bfa05e1866e91d (1092fb355598e8d796d71ee4607e51752437cb0651a6d5d8f092d4cbcee10c0f)"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.585443500Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint eb8696ff1996dad352a2c5bd02b2abb6b69b84bcb8508f4ba2d107072a49bd02 92dc7d12436e8bee86acefd3d19f6918288e14e185d266c003482520b2b395d8], retrying...."
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.735742139Z" level=info msg="Removing stale sandbox a9132aa5b22ac90401eb516901a7eb6a3d9fb1daff42f99397aff71ecb4809f7 (0b29ce09353d4bb2cc9098b1194b461266068720a85649ff7f8078a3594965bf)"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.739182286Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 2fce935fa8679bda61c5e579ea0de74a1b56d9814471f14afbd4093291c6a03a 832d8f03e2355a50d9f20788957ed5a3945a460b7a4933594bbf2f4c332c5bda], retrying...."
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.753438880Z" level=info msg="There are old running containers, the network config will not take affect"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.799594306Z" level=info msg="Loading containers: done."
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.840900967Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.841878280Z" level=info msg="Daemon has completed initialization"
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.875211132Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 18 13:32:43 running-upgrade-208100 systemd[1]: Started Docker Application Container Engine.
	Dec 18 13:32:43 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:43.876014843Z" level=info msg="API listen on [::]:2376"
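
The "Removing stale sandbox" and "There are old running containers" lines show the restarted daemon re-attaching to containers that kept running while dockerd was down, which is the behaviour Docker's live-restore option provides; whether this guest actually sets it is not visible in the log. A minimal Go sketch that reports the setting from the documented daemon.json location (path and key per Docker's docs; presence of the file on this guest is an assumption):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		// "live-restore" is the documented daemon.json key that keeps
		// containers running across a dockerd restart.
		data, err := os.ReadFile("/etc/docker/daemon.json")
		if err != nil {
			fmt.Println("no daemon.json:", err)
			return
		}
		var cfg map[string]interface{}
		if err := json.Unmarshal(data, &cfg); err != nil {
			fmt.Println("parse:", err)
			return
		}
		enabled, _ := cfg["live-restore"].(bool)
		fmt.Println("live-restore enabled:", enabled)
	}
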
	Dec 18 13:32:44 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:44.349703296Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d1d35f63acd71a52bd593d048b466df4b6b91bc0e6b80b9e5f145b14bcf2ae9f/shim.sock" debug=false pid=9380
	Dec 18 13:32:44 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:44.510443242Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bfc76e6407688fc3afad0d2c724767c58c0cfbf7217737274d348171e982cb2f/shim.sock" debug=false pid=9417
	Dec 18 13:32:44 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:44.510968349Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d759568fabc5cb3a95f93a6bd9dc398f0ec5d208c9f12c78ff4e0d995bc33eca/shim.sock" debug=false pid=9416
	Dec 18 13:32:44 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:44.526792661Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/633ff02596d63b2fd3674ae50988bdf2e4dd905b97d2284d60a386fcd3529048/shim.sock" debug=false pid=9429
	Dec 18 13:32:44 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:44.539870835Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b2645b9471495d6d1e69069be948dfd23c4c4deadd0a00e99ec0f8dd7e85bd6d/shim.sock" debug=false pid=9431
	Dec 18 13:32:44 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:44.571987364Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4669b3e39ea926df6aa01c1a5a99dcc31a8e8ea8411b602cc07145eee7f75ed4/shim.sock" debug=false pid=9466
	Dec 18 13:32:44 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:44.599366630Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d74b3c1e87b87325f3c0762d59d69f9ae2d6ca9c2e64f17f8b32d515131376ef/shim.sock" debug=false pid=9482
	Dec 18 13:32:45 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:45.233816552Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/102a30f338430c2229b19c989ad109579135b516dbcd2b7d10d1dc09962c46f0/shim.sock" debug=false pid=9671
	Dec 18 13:32:45 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:45.519756409Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e4e74c211a5107319ce015b1a1dcf443b01f864a74ba66d4d9d5f3266e444cd7/shim.sock" debug=false pid=9742
	Dec 18 13:32:45 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:45.541833499Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a925f2b64cfee4612fc19cbc0f6e1a378fc7e2862658871d6cef23b09a7a3c31/shim.sock" debug=false pid=9752
	Dec 18 13:32:45 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:45.565171006Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8bd5a5f38ae7e08bdbb486c2797fa232c0a3691ebc094dc16d2d430e772371d7/shim.sock" debug=false pid=9753
	Dec 18 13:32:45 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:45.681742738Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b17f31d477d332c6c394226071a9edae5e46792d0cfe59f6191012c4e6b60b4d/shim.sock" debug=false pid=9786
	Dec 18 13:32:45 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:45.774952963Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8737f7395de1fe81a5d873ed4ec7a3be3be269ff3eaa2970751eb81bb92637c7/shim.sock" debug=false pid=9811
	Dec 18 13:32:46 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:46.885808772Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7bba44b457fa3d7b0de1b4f7e37c90e5b2638d21487c5d2a33195996898b685e/shim.sock" debug=false pid=10026
	Dec 18 13:32:47 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:47.890792180Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:47 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:47.891207985Z" level=warning msg="aeaca1aa5841433fbbdebe49705b87040e0c85c580d17d4332b17bcda37c480d cleanup: failed to unmount IPC: umount /var/lib/docker/containers/aeaca1aa5841433fbbdebe49705b87040e0c85c580d17d4332b17bcda37c480d/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:32:47 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:47.909201814Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=aeaca1aa5841433fbbdebe49705b87040e0c85c580d17d4332b17bcda37c480d path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/aeaca1aa5841433fbbdebe49705b87040e0c85c580d17d4332b17bcda37c480d"
	Dec 18 13:32:47 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:47.910999437Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:32:48 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:48.032847180Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8e2de2e60cc14834c1ffe974a39702d314264b5315d426fa407b7a930f25de69/shim.sock" debug=false pid=10124
	Dec 18 13:32:48 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:48.482970114Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fd187324c33b84f5eedd0d1f50dbf76ef140885044f7cbec8f9ea3557f1ab583/shim.sock" debug=false pid=10184
	Dec 18 13:32:55 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:55.239032934Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ca99e97d3d2921f29b622bba973b11ef162ad88411b7aa8371e533f0c51d8130/shim.sock" debug=false pid=10344
	Dec 18 13:32:59 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:32:59.366809818Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/073e0237abecca1ebe8c9cb4bae36c94ce12867d4b93b98d6254cb9a5a469ac1/shim.sock" debug=false pid=10430
	Dec 18 13:33:13 running-upgrade-208100 systemd[1]: Stopping Docker Application Container Engine...
	Dec 18 13:33:13 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:13.443064756Z" level=info msg="Processing signal 'terminated'"
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.866719931Z" level=info msg="shim reaped" id=d759568fabc5cb3a95f93a6bd9dc398f0ec5d208c9f12c78ff4e0d995bc33eca
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.877607427Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.884461324Z" level=info msg="shim reaped" id=bfc76e6407688fc3afad0d2c724767c58c0cfbf7217737274d348171e982cb2f
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.893561821Z" level=info msg="shim reaped" id=d74b3c1e87b87325f3c0762d59d69f9ae2d6ca9c2e64f17f8b32d515131376ef
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.902616918Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.904138117Z" level=info msg="shim reaped" id=633ff02596d63b2fd3674ae50988bdf2e4dd905b97d2284d60a386fcd3529048
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.923131110Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.923384110Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.926036409Z" level=info msg="shim reaped" id=8bd5a5f38ae7e08bdbb486c2797fa232c0a3691ebc094dc16d2d430e772371d7
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.934173006Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.934954106Z" level=warning msg="8bd5a5f38ae7e08bdbb486c2797fa232c0a3691ebc094dc16d2d430e772371d7 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/8bd5a5f38ae7e08bdbb486c2797fa232c0a3691ebc094dc16d2d430e772371d7/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:33:14 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:14.988498086Z" level=info msg="shim reaped" id=ca99e97d3d2921f29b622bba973b11ef162ad88411b7aa8371e533f0c51d8130
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.009629178Z" level=info msg="shim reaped" id=073e0237abecca1ebe8c9cb4bae36c94ce12867d4b93b98d6254cb9a5a469ac1
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.009816178Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.010115978Z" level=warning msg="ca99e97d3d2921f29b622bba973b11ef162ad88411b7aa8371e533f0c51d8130 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/ca99e97d3d2921f29b622bba973b11ef162ad88411b7aa8371e533f0c51d8130/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.020749774Z" level=info msg="shim reaped" id=8e2de2e60cc14834c1ffe974a39702d314264b5315d426fa407b7a930f25de69
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.020990374Z" level=info msg="shim reaped" id=b2645b9471495d6d1e69069be948dfd23c4c4deadd0a00e99ec0f8dd7e85bd6d
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.021341974Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.021710574Z" level=warning msg="073e0237abecca1ebe8c9cb4bae36c94ce12867d4b93b98d6254cb9a5a469ac1 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/073e0237abecca1ebe8c9cb4bae36c94ce12867d4b93b98d6254cb9a5a469ac1/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.031049770Z" level=info msg="shim reaped" id=e4e74c211a5107319ce015b1a1dcf443b01f864a74ba66d4d9d5f3266e444cd7
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.033216670Z" level=info msg="shim reaped" id=a925f2b64cfee4612fc19cbc0f6e1a378fc7e2862658871d6cef23b09a7a3c31
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.036690168Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.036737768Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.040032867Z" level=info msg="shim reaped" id=4669b3e39ea926df6aa01c1a5a99dcc31a8e8ea8411b602cc07145eee7f75ed4
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.041424267Z" level=info msg="shim reaped" id=d1d35f63acd71a52bd593d048b466df4b6b91bc0e6b80b9e5f145b14bcf2ae9f
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.042140766Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.043040466Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.044517065Z" level=warning msg="a925f2b64cfee4612fc19cbc0f6e1a378fc7e2862658871d6cef23b09a7a3c31 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/a925f2b64cfee4612fc19cbc0f6e1a378fc7e2862658871d6cef23b09a7a3c31/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.046333165Z" level=warning msg="e4e74c211a5107319ce015b1a1dcf443b01f864a74ba66d4d9d5f3266e444cd7 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/e4e74c211a5107319ce015b1a1dcf443b01f864a74ba66d4d9d5f3266e444cd7/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.058508060Z" level=info msg="shim reaped" id=7bba44b457fa3d7b0de1b4f7e37c90e5b2638d21487c5d2a33195996898b685e
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.059742260Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.065407558Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.065910458Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:15 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:15.852306368Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a6fba5786ff788b9dc73bdb9810a96ce9278d87dec794da1ff55462711b2e979/shim.sock" debug=false pid=11444
	Dec 18 13:33:16 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:16.977205053Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ae36ca9692232e6bed49d3d2678d6128fb5090f1ba6d4ade00e32bcde5414637/shim.sock" debug=false pid=11489
	Dec 18 13:33:19 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:19.022810598Z" level=info msg="shim reaped" id=fd187324c33b84f5eedd0d1f50dbf76ef140885044f7cbec8f9ea3557f1ab583
	Dec 18 13:33:19 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:19.032839195Z" level=warning msg="fd187324c33b84f5eedd0d1f50dbf76ef140885044f7cbec8f9ea3557f1ab583 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/fd187324c33b84f5eedd0d1f50dbf76ef140885044f7cbec8f9ea3557f1ab583/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:33:19 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:19.033426094Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:19 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:19.043584191Z" level=info msg="shim reaped" id=8737f7395de1fe81a5d873ed4ec7a3be3be269ff3eaa2970751eb81bb92637c7
	Dec 18 13:33:19 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:19.052117387Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:19 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:19.052491287Z" level=warning msg="8737f7395de1fe81a5d873ed4ec7a3be3be269ff3eaa2970751eb81bb92637c7 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/8737f7395de1fe81a5d873ed4ec7a3be3be269ff3eaa2970751eb81bb92637c7/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:33:19 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:19.181671840Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6c10ea079e9a1716fd4db0cec5465c43fb0cdf9cf1c493c61c1e199192233fc2/shim.sock" debug=false pid=11598
	Dec 18 13:33:19 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:19.579642693Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4dcc6de242f1ac3ca36ff633ed825f925894ec608e48e9ff5989802973f3d91c/shim.sock" debug=false pid=11661
	Dec 18 13:33:22 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:22.727641232Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fb51af98cd9b6de29d178a9c8da2620cc7bdf852cebeff8d80041280c039eea5/shim.sock" debug=false pid=11755
	Dec 18 13:33:23 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:23.188947062Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/087b2d8be812db0e126c8ffdf135a12a904c94e88e3d0cc22b004de7a1ba4f3f/shim.sock" debug=false pid=11820
	Dec 18 13:33:23 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:23.823598128Z" level=info msg="Container 102a30f338430c2229b19c989ad109579135b516dbcd2b7d10d1dc09962c46f0 failed to exit within 10 seconds of signal 15 - using the force"
	Dec 18 13:33:23 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:23.906010597Z" level=info msg="Container b17f31d477d332c6c394226071a9edae5e46792d0cfe59f6191012c4e6b60b4d failed to exit within 10 seconds of signal 15 - using the force"
	Dec 18 13:33:23 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:23.972069373Z" level=info msg="shim reaped" id=102a30f338430c2229b19c989ad109579135b516dbcd2b7d10d1dc09962c46f0
	Dec 18 13:33:23 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:23.980801970Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:23 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:23.981036470Z" level=warning msg="102a30f338430c2229b19c989ad109579135b516dbcd2b7d10d1dc09962c46f0 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/102a30f338430c2229b19c989ad109579135b516dbcd2b7d10d1dc09962c46f0/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.076674134Z" level=info msg="shim reaped" id=b17f31d477d332c6c394226071a9edae5e46792d0cfe59f6191012c4e6b60b4d
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.087425430Z" level=warning msg="b17f31d477d332c6c394226071a9edae5e46792d0cfe59f6191012c4e6b60b4d cleanup: failed to unmount IPC: umount /var/lib/docker/containers/b17f31d477d332c6c394226071a9edae5e46792d0cfe59f6191012c4e6b60b4d/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.087466030Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.153496106Z" level=info msg="Daemon shutdown complete"
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.153553506Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.153814206Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.156690605Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.217665282Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.227305179Z" level=warning msg="eba7c202d8f4db2001fe02512de987a29d2fd1ce3ef67ba7cce1f0192b50ee16 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/eba7c202d8f4db2001fe02512de987a29d2fd1ce3ef67ba7cce1f0192b50ee16/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.241495674Z" level=error msg="eba7c202d8f4db2001fe02512de987a29d2fd1ce3ef67ba7cce1f0192b50ee16 cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.241781073Z" level=error msg="Handler for POST /containers/eba7c202d8f4db2001fe02512de987a29d2fd1ce3ef67ba7cce1f0192b50ee16/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.529152967Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.536142565Z" level=warning msg="d48eca03f6c616ae5ef3ca13fb7d198addcd4c7dd9c687dfa664291e4f6a45a4 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d48eca03f6c616ae5ef3ca13fb7d198addcd4c7dd9c687dfa664291e4f6a45a4/mounts/shm, flags: 0x2: no such file or directory"
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.543551562Z" level=error msg="d48eca03f6c616ae5ef3ca13fb7d198addcd4c7dd9c687dfa664291e4f6a45a4 cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 18 13:33:24 running-upgrade-208100 dockerd[8895]: time="2023-12-18T13:33:24.543688362Z" level=error msg="Handler for POST /containers/d48eca03f6c616ae5ef3ca13fb7d198addcd4c7dd9c687dfa664291e4f6a45a4/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: docker.service: Succeeded.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: Stopped Docker Application Container Engine.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 11444 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 11489 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 11598 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 11661 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 11755 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: docker.service: Found left-over process 11820 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 13:33:25 running-upgrade-208100 dockerd[11947]: time="2023-12-18T13:33:25.229411409Z" level=info msg="Starting up"
	Dec 18 13:33:25 running-upgrade-208100 dockerd[11947]: time="2023-12-18T13:33:25.232850708Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 18 13:33:25 running-upgrade-208100 dockerd[11947]: time="2023-12-18T13:33:25.232982008Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 18 13:33:25 running-upgrade-208100 dockerd[11947]: time="2023-12-18T13:33:25.233013208Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 18 13:33:25 running-upgrade-208100 dockerd[11947]: time="2023-12-18T13:33:25.233033308Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 18 13:33:25 running-upgrade-208100 dockerd[11947]: time="2023-12-18T13:33:25.233507108Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Dec 18 13:33:25 running-upgrade-208100 dockerd[11947]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused": unavailable
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 18 13:33:25 running-upgrade-208100 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1218 13:33:25.311056   11484 out.go:239] * 
	W1218 13:33:25.312864   11484 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 13:33:25.313836   11484 out.go:177] 

                                                
                                                
** /stderr **
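Note: the journal excerpt above shows an ordered shutdown ("Daemon shutdown complete" at 13:33:24) followed by a restart that dies because the new dockerd cannot dial /run/containerd/containerd.sock ("connection refused"), i.e. dockerd came up before a containerd socket was listening. A minimal sketch of how one might confirm that ordering from inside the VM, assuming standard systemd tooling and that containerd runs as its own unit on this image:

	systemctl status containerd --no-pager          # is containerd up at all?
	ls -l /run/containerd/containerd.sock           # does the socket exist yet?
	journalctl -u docker -u containerd --since "13:33:20" --no-pager   # compare both daemons around the failure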
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-windows-amd64.exe start -p running-upgrade-208100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-12-18 13:33:25.8500697 +0000 UTC m=+6703.533648101
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-208100 -n running-upgrade-208100
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-208100 -n running-upgrade-208100: exit status 6 (13.0603429s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W1218 13:33:25.990627    1684 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1218 13:33:38.826686    1684 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-208100" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-208100" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-208100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-208100
E1218 13:33:42.373856   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 13:33:45.632866   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 13:33:56.571526   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 13:34:02.422142   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-208100: (45.9060154s)
--- FAIL: TestRunningBinaryUpgrade (540.45s)
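Note: the "kubectl is pointing to stale minikube-vm" warning in the status output above names its own remedy; against a profile that still exists (this one is deleted during cleanup), the sketch would be:

	out/minikube-windows-amd64.exe -p running-upgrade-208100 update-context
	kubectl config get-contexts    # verify the context now resolves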

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (600.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.6.2.1184398753.exe start -p stopped-upgrade-592200 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:196: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.6.2.1184398753.exe start -p stopped-upgrade-592200 --memory=2200 --vm-driver=hyperv: (4m43.5720798s)
version_upgrade_test.go:205: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.6.2.1184398753.exe -p stopped-upgrade-592200 stop
version_upgrade_test.go:205: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.6.2.1184398753.exe -p stopped-upgrade-592200 stop: (28.6800092s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-592200 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p stopped-upgrade-592200 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (4m48.361893s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-592200] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17824
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the hyperv driver based on existing profile
	* Starting control plane node stopped-upgrade-592200 in cluster stopped-upgrade-592200
	* Restarting existing hyperv VM for "stopped-upgrade-592200" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1218 13:30:19.495805   14360 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1218 13:30:19.575438   14360 out.go:296] Setting OutFile to fd 1800 ...
	I1218 13:30:19.576417   14360 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 13:30:19.576417   14360 out.go:309] Setting ErrFile to fd 1796...
	I1218 13:30:19.576417   14360 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 13:30:19.607422   14360 out.go:303] Setting JSON to false
	I1218 13:30:19.615433   14360 start.go:128] hostinfo: {"hostname":"minikube7","uptime":6694,"bootTime":1702899525,"procs":208,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1218 13:30:19.615433   14360 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1218 13:30:19.617425   14360 out.go:177] * [stopped-upgrade-592200] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1218 13:30:19.618423   14360 notify.go:220] Checking for updates...
	I1218 13:30:19.662771   14360 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 13:30:19.720841   14360 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 13:30:19.722345   14360 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1218 13:30:19.771852   14360 out.go:177]   - MINIKUBE_LOCATION=17824
	I1218 13:30:19.772871   14360 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 13:30:19.862923   14360 config.go:182] Loaded profile config "stopped-upgrade-592200": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1218 13:30:19.862923   14360 start_flags.go:694] config upgrade: Driver=hyperv
	I1218 13:30:19.862923   14360 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd
	I1218 13:30:19.862923   14360 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\stopped-upgrade-592200\config.json ...
	I1218 13:30:19.962630   14360 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1218 13:30:19.964175   14360 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 13:30:26.610821   14360 out.go:177] * Using the hyperv driver based on existing profile
	I1218 13:30:26.712608   14360 start.go:298] selected driver: hyperv
	I1218 13:30:26.712608   14360 start.go:902] validating driver "hyperv" against &{Name:stopped-upgrade-592200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.226.196 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1218 13:30:26.712608   14360 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 13:30:26.767694   14360 cni.go:84] Creating CNI manager for ""
	I1218 13:30:26.767694   14360 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1218 13:30:26.767694   14360 start_flags.go:323] config:
	{Name:stopped-upgrade-592200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.226.196 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1218 13:30:26.768070   14360 iso.go:125] acquiring lock: {Name:mka7fdf4332632f41b99a369cc525e40499a0030 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 13:30:26.861989   14360 out.go:177] * Starting control plane node stopped-upgrade-592200 in cluster stopped-upgrade-592200
	I1218 13:30:26.864312   14360 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	W1218 13:30:26.911926   14360 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
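	Note: the 404 above appears to be expected rather than a failure: no preload tarball is published for Kubernetes v1.17.0, so minikube falls back to caching the eight images individually (the "windows sanitize" block that follows). The check is reproducible by hand with the URL from this log:
	
		curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 | head -n 1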
	I1218 13:30:26.912223   14360 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\stopped-upgrade-592200\config.json ...
	I1218 13:30:26.912415   14360 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0
	I1218 13:30:26.912415   14360 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I1218 13:30:26.912415   14360 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0
	I1218 13:30:26.912415   14360 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1218 13:30:26.912415   14360 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I1218 13:30:26.912415   14360 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0
	I1218 13:30:26.912415   14360 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5
	I1218 13:30:26.912415   14360 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0
	I1218 13:30:26.915237   14360 start.go:365] acquiring machines lock for stopped-upgrade-592200: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1218 13:30:27.108545   14360 cache.go:107] acquiring lock: {Name:mkeac0ccf1d6f0e0eb0c19801602a218964c6025 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 13:30:27.109549   14360 cache.go:107] acquiring lock: {Name:mk945c9573a262bf2c410f3ec338c9e4cbac7ce3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 13:30:27.109549   14360 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 exists
	I1218 13:30:27.109549   14360 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 exists
	I1218 13:30:27.109549   14360 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.4.3-0" took 197.1334ms
	I1218 13:30:27.109549   14360 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 succeeded
	I1218 13:30:27.109549   14360 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.17.0" took 197.1334ms
	I1218 13:30:27.109549   14360 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 succeeded
	I1218 13:30:27.117552   14360 cache.go:107] acquiring lock: {Name:mk6522f86f404131d1768d0de0ce775513ec42e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 13:30:27.117552   14360 cache.go:107] acquiring lock: {Name:mk43c24b3570a50e54ec9f1dc43aba5ea2e54859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 13:30:27.117552   14360 cache.go:107] acquiring lock: {Name:mk3a663ba67028a054dd5a6e96ba367c56e950d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 13:30:27.117552   14360 cache.go:107] acquiring lock: {Name:mke680978131adbec647605a81bab7c783de93d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 13:30:27.117552   14360 cache.go:107] acquiring lock: {Name:mk1869bccfa4db5e538bd31af28e9c95a48df16c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 13:30:27.117552   14360 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 exists
	I1218 13:30:27.117552   14360 cache.go:107] acquiring lock: {Name:mkc6e9060bea9211e4f8126ac5de344442cb8c23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 13:30:27.117552   14360 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1218 13:30:27.117552   14360 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns_1.6.5" took 205.136ms
	I1218 13:30:27.117552   14360 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 succeeded
	I1218 13:30:27.117552   14360 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 205.136ms
	I1218 13:30:27.117552   14360 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1218 13:30:27.118535   14360 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 exists
	I1218 13:30:27.118535   14360 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 exists
	I1218 13:30:27.118535   14360 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.17.0" took 206.1194ms
	I1218 13:30:27.118535   14360 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 succeeded
	I1218 13:30:27.118535   14360 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.1" took 206.1194ms
	I1218 13:30:27.118535   14360 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 succeeded
	I1218 13:30:27.118535   14360 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 exists
	I1218 13:30:27.118535   14360 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.17.0" took 206.1194ms
	I1218 13:30:27.118535   14360 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 succeeded
	I1218 13:30:27.216567   14360 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 exists
	I1218 13:30:27.217521   14360 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.17.0" took 305.1048ms
	I1218 13:30:27.217521   14360 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 succeeded
	I1218 13:30:27.217521   14360 cache.go:87] Successfully saved all images to host disk.
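	Note: the "windows sanitize" renames above exist because ":" is not a legal character in Windows file names, so each image:tag becomes image_tag on disk; the \\?\Volume{...} spellings are the same cache paths in volume-GUID form. Listing the cache by hand (path taken from this log):
	
		powershell.exe -NoProfile -NonInteractive "Get-ChildItem C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io"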
	I1218 13:33:00.372059   14360 start.go:369] acquired machines lock for "stopped-upgrade-592200" in 2m33.4561887s
	I1218 13:33:00.372328   14360 start.go:96] Skipping create...Using existing machine configuration
	I1218 13:33:00.372349   14360 fix.go:54] fixHost starting: minikube
	I1218 13:33:00.372638   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:33:02.629064   14360 main.go:141] libmachine: [stdout =====>] : Off
	
	I1218 13:33:02.629064   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:02.629163   14360 fix.go:102] recreateIfNeeded on stopped-upgrade-592200: state=Stopped err=<nil>
	W1218 13:33:02.629163   14360 fix.go:128] unexpected machine state, will restart: <nil>
	I1218 13:33:02.630134   14360 out.go:177] * Restarting existing hyperv VM for "stopped-upgrade-592200" ...
	I1218 13:33:02.630891   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM stopped-upgrade-592200
	I1218 13:33:05.747282   14360 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:33:05.747418   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:05.747418   14360 main.go:141] libmachine: Waiting for host to start...
	I1218 13:33:05.747521   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:33:08.426250   14360 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:33:08.426324   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:08.426387   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]
	I1218 13:33:11.255441   14360 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:33:11.255441   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:12.259437   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:33:14.602586   14360 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:33:14.602586   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:14.602760   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]
	I1218 13:33:17.264963   14360 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:33:17.265293   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:18.265847   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:33:20.557424   14360 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:33:20.557585   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:20.557720   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]
	I1218 13:33:23.176272   14360 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:33:23.176272   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:24.188635   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:33:26.594884   14360 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:33:26.594884   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:26.595035   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]
	I1218 13:33:29.377143   14360 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:33:29.377321   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:30.379583   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:33:32.710610   14360 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:33:32.710610   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:32.710716   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]
	I1218 13:33:35.396111   14360 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:33:35.396187   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:36.410787   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:33:38.710225   14360 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:33:38.710299   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:38.710299   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]
	I1218 13:33:41.503967   14360 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:33:41.504031   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:42.514574   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:33:44.934684   14360 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:33:44.935100   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:44.935100   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]
	I1218 13:33:47.659412   14360 main.go:141] libmachine: [stdout =====>] : 
	I1218 13:33:47.659668   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:48.673484   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:33:50.998427   14360 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:33:50.998632   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:50.998791   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]
	I1218 13:33:53.682067   14360 main.go:141] libmachine: [stdout =====>] : 192.168.226.196
	
	I1218 13:33:53.682067   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:53.684837   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:33:55.852862   14360 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:33:55.853040   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:33:55.853040   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]
	I1218 13:33:58.531734   14360 main.go:141] libmachine: [stdout =====>] : 192.168.226.196
	
	I1218 13:33:58.532010   14360 main.go:141] libmachine: [stderr =====>] : 
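	Note: the repeated [stdout]/[stderr] pairs above are libmachine polling Hyper-V about once a second; the address query stays empty until the guest's integration services report an IPv4 address (roughly 50 s after Start-VM here). The two PowerShell one-liners it shells out to can be run by hand (VM name from this log):
	
		powershell.exe -NoProfile -NonInteractive "( Hyper-V\Get-VM stopped-upgrade-592200 ).state"
		powershell.exe -NoProfile -NonInteractive "(( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]"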
	I1218 13:33:58.532164   14360 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\stopped-upgrade-592200\config.json ...
	I1218 13:33:58.534705   14360 machine.go:88] provisioning docker machine ...
	I1218 13:33:58.534817   14360 buildroot.go:166] provisioning hostname "stopped-upgrade-592200"
	I1218 13:33:58.534891   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:34:00.837418   14360 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:34:00.837418   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:00.837418   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]
	I1218 13:34:03.565083   14360 main.go:141] libmachine: [stdout =====>] : 192.168.226.196
	
	I1218 13:34:03.565083   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:03.572243   14360 main.go:141] libmachine: Using SSH client type: native
	I1218 13:34:03.572867   14360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.226.196 22 <nil> <nil>}
	I1218 13:34:03.572867   14360 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-592200 && echo "stopped-upgrade-592200" | sudo tee /etc/hostname
	I1218 13:34:03.724412   14360 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-592200
	
	I1218 13:34:03.724519   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:34:05.975322   14360 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:34:05.975481   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:05.975716   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]
	I1218 13:34:08.644412   14360 main.go:141] libmachine: [stdout =====>] : 192.168.226.196
	
	I1218 13:34:08.644412   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:08.650131   14360 main.go:141] libmachine: Using SSH client type: native
	I1218 13:34:08.650680   14360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.226.196 22 <nil> <nil>}
	I1218 13:34:08.650680   14360 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-592200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-592200/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-592200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 13:34:08.795577   14360 main.go:141] libmachine: SSH cmd err, output: <nil>: 
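	Note: the two SSH steps above set the hostname both transiently (sudo hostname) and persistently (/etc/hostname), then keep /etc/hosts consistent; the script is idempotent, rewriting the 127.0.1.1 line only when no existing line already ends in the hostname. A quick check from inside the VM (sketch):
	
		grep 127.0.1.1 /etc/hosts    # expected: 127.0.1.1 stopped-upgrade-592200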
	I1218 13:34:08.795717   14360 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1218 13:34:08.795774   14360 buildroot.go:174] setting up certificates
	I1218 13:34:08.795774   14360 provision.go:83] configureAuth start
	I1218 13:34:08.795774   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:34:10.964686   14360 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:34:10.964686   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:10.964686   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]
	I1218 13:34:13.646018   14360 main.go:141] libmachine: [stdout =====>] : 192.168.226.196
	
	I1218 13:34:13.646018   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:13.646018   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:34:15.862873   14360 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:34:15.862977   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:15.862977   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]
	I1218 13:34:18.483463   14360 main.go:141] libmachine: [stdout =====>] : 192.168.226.196
	
	I1218 13:34:18.483729   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:18.483755   14360 provision.go:138] copyHostCerts
	I1218 13:34:18.483863   14360 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1218 13:34:18.483863   14360 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1218 13:34:18.484626   14360 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1218 13:34:18.485406   14360 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1218 13:34:18.485406   14360 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1218 13:34:18.486046   14360 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1218 13:34:18.487391   14360 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1218 13:34:18.487391   14360 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1218 13:34:18.487831   14360 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1218 13:34:18.489151   14360 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.stopped-upgrade-592200 san=[192.168.226.196 192.168.226.196 localhost 127.0.0.1 minikube stopped-upgrade-592200]
	I1218 13:34:18.898627   14360 provision.go:172] copyRemoteCerts
	I1218 13:34:18.912792   14360 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 13:34:18.912968   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:34:21.083033   14360 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:34:21.083128   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:21.083236   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]
	I1218 13:34:23.717347   14360 main.go:141] libmachine: [stdout =====>] : 192.168.226.196
	
	I1218 13:34:23.717347   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:23.717955   14360 sshutil.go:53] new ssh client: &{IP:192.168.226.196 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\stopped-upgrade-592200\id_rsa Username:docker}
	I1218 13:34:23.821228   14360 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9084161s)
	I1218 13:34:23.822072   14360 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 13:34:23.845073   14360 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1218 13:34:23.862691   14360 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I1218 13:34:23.880795   14360 provision.go:86] duration metric: configureAuth took 15.0849597s
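	Note: configureAuth regenerates a server certificate whose SANs (the san=[...] list above) cover the VM's current IP, localhost, and both machine names, then copies ca.pem, server.pem and server-key.pem into /etc/docker for dockerd's --tlsverify flags. To inspect the result from inside the VM (assumes openssl is available on the ISO):
	
		ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
		openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 "Subject Alternative Name"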
	I1218 13:34:23.880795   14360 buildroot.go:189] setting minikube options for container-runtime
	I1218 13:34:23.881458   14360 config.go:182] Loaded profile config "stopped-upgrade-592200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1218 13:34:23.881547   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:34:26.113720   14360 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:34:26.113720   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:26.113834   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]
	I1218 13:34:28.805178   14360 main.go:141] libmachine: [stdout =====>] : 192.168.226.196
	
	I1218 13:34:28.805178   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:28.810039   14360 main.go:141] libmachine: Using SSH client type: native
	I1218 13:34:28.810795   14360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.226.196 22 <nil> <nil>}
	I1218 13:34:28.810795   14360 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1218 13:34:28.952299   14360 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1218 13:34:28.952299   14360 buildroot.go:70] root file system type: tmpfs
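	Note: a tmpfs root filesystem is characteristic of minikube's RAM-backed buildroot ISO, which appears to be what the buildroot.go check above is confirming before the docker unit is written. The probe runs unchanged on any Linux guest:
	
		df --output=fstype / | tail -n 1    # prints "tmpfs" on the minikube ISO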
	I1218 13:34:28.953660   14360 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1218 13:34:28.953763   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:34:31.215315   14360 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:34:31.215400   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:31.215400   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]
	I1218 13:34:33.779345   14360 main.go:141] libmachine: [stdout =====>] : 192.168.226.196
	
	I1218 13:34:33.779580   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:33.786231   14360 main.go:141] libmachine: Using SSH client type: native
	I1218 13:34:33.786689   14360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.226.196 22 <nil> <nil>}
	I1218 13:34:33.786689   14360 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1218 13:34:33.935393   14360 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
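The unit written above relies on systemd override semantics, as its own comments explain: an empty ExecStart= first clears the command inherited from the base unit, and the second ExecStart= supplies the replacement; without the reset, systemd rejects the unit with "Service has more than one ExecStart= setting". A minimal sketch of the same pattern as a drop-in (the override path and the trimmed dockerd flags are illustrative, not what minikube writes):

	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
	[Service]
	# Empty ExecStart= resets the inherited command; the next line replaces it.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker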
	I1218 13:34:33.935594   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:34:36.061095   14360 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:34:36.061316   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:36.061316   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]
	I1218 13:34:38.645099   14360 main.go:141] libmachine: [stdout =====>] : 192.168.226.196
	
	I1218 13:34:38.645391   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:38.650187   14360 main.go:141] libmachine: Using SSH client type: native
	I1218 13:34:38.651243   14360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.226.196 22 <nil> <nil>}
	I1218 13:34:38.651311   14360 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1218 13:34:39.822278   14360 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1218 13:34:39.822350   14360 machine.go:91] provisioned docker machine in 41.287364s
	I1218 13:34:39.822403   14360 start.go:300] post-start starting for "stopped-upgrade-592200" (driver="hyperv")
	I1218 13:34:39.822403   14360 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 13:34:39.836238   14360 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 13:34:39.836238   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:34:41.992246   14360 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:34:41.992246   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:41.992332   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]
	I1218 13:34:44.546926   14360 main.go:141] libmachine: [stdout =====>] : 192.168.226.196
	
	I1218 13:34:44.546926   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:44.547502   14360 sshutil.go:53] new ssh client: &{IP:192.168.226.196 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\stopped-upgrade-592200\id_rsa Username:docker}
	I1218 13:34:44.651448   14360 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8151191s)
	I1218 13:34:44.666411   14360 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 13:34:44.677407   14360 info.go:137] Remote host: Buildroot 2019.02.7
	I1218 13:34:44.677407   14360 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1218 13:34:44.677407   14360 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1218 13:34:44.679190   14360 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem -> 149282.pem in /etc/ssl/certs
	I1218 13:34:44.691437   14360 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 13:34:44.700267   14360 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem --> /etc/ssl/certs/149282.pem (1708 bytes)
	I1218 13:34:44.718104   14360 start.go:303] post-start completed in 4.8956805s
	I1218 13:34:44.718104   14360 fix.go:56] fixHost completed within 1m44.3453272s
	I1218 13:34:44.718104   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:34:46.874802   14360 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:34:46.875044   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:46.875231   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]
	I1218 13:34:49.452248   14360 main.go:141] libmachine: [stdout =====>] : 192.168.226.196
	
	I1218 13:34:49.452474   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:49.458464   14360 main.go:141] libmachine: Using SSH client type: native
	I1218 13:34:49.459143   14360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.226.196 22 <nil> <nil>}
	I1218 13:34:49.459143   14360 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1218 13:34:49.599078   14360 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702906489.605816705
	
	I1218 13:34:49.599142   14360 fix.go:206] guest clock: 1702906489.605816705
	I1218 13:34:49.599142   14360 fix.go:219] Guest: 2023-12-18 13:34:49.605816705 +0000 UTC Remote: 2023-12-18 13:34:44.7181043 +0000 UTC m=+265.339380701 (delta=4.887712405s)
	I1218 13:34:49.599207   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:34:51.722829   14360 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:34:51.722829   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:51.722829   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]
	I1218 13:34:54.280280   14360 main.go:141] libmachine: [stdout =====>] : 192.168.226.196
	
	I1218 13:34:54.280472   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:54.285836   14360 main.go:141] libmachine: Using SSH client type: native
	I1218 13:34:54.286742   14360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.226.196 22 <nil> <nil>}
	I1218 13:34:54.286820   14360 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702906489
	I1218 13:34:54.428367   14360 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Dec 18 13:34:49 UTC 2023
	
	I1218 13:34:54.428367   14360 fix.go:226] clock set: Mon Dec 18 13:34:49 UTC 2023
	 (err=<nil>)
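The clock fix above is two SSH round trips: read the guest clock as epoch seconds (date +%s.%N), compare it against the host-side timestamp (a delta of 4.887712405s in this run), then write the host epoch back with date -s @<seconds>. A condensed sketch of the same sequence, assuming a POSIX shell on the host (the user and address come from this run; the one-liner form is illustrative, not minikube's Go implementation):

	# Guest clock as seconds.nanoseconds since the epoch.
	guest=$(ssh docker@192.168.226.196 'date +%s.%N')
	# Host clock in the same units; the difference is the drift to correct.
	host=$(date +%s)
	# Set the guest clock from the host epoch (what the log runs above as
	# "sudo date -s @1702906489").
	ssh docker@192.168.226.196 "sudo date -s @${host}"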
	I1218 13:34:54.428473   14360 start.go:83] releasing machines lock for "stopped-upgrade-592200", held for 1m54.055765s
	I1218 13:34:54.428643   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:34:56.656655   14360 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:34:56.656947   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:56.657015   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]
	I1218 13:34:59.332594   14360 main.go:141] libmachine: [stdout =====>] : 192.168.226.196
	
	I1218 13:34:59.332594   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:34:59.337268   14360 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 13:34:59.337447   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:34:59.350955   14360 ssh_runner.go:195] Run: cat /version.json
	I1218 13:34:59.350955   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-592200 ).state
	I1218 13:35:01.715722   14360 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:35:01.715722   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:35:01.715722   14360 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 13:35:01.715853   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]
	I1218 13:35:01.715853   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:35:01.715989   14360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-592200 ).networkadapters[0]).ipaddresses[0]
	I1218 13:35:04.547965   14360 main.go:141] libmachine: [stdout =====>] : 192.168.226.196
	
	I1218 13:35:04.547965   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:35:04.548579   14360 sshutil.go:53] new ssh client: &{IP:192.168.226.196 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\stopped-upgrade-592200\id_rsa Username:docker}
	I1218 13:35:04.590689   14360 main.go:141] libmachine: [stdout =====>] : 192.168.226.196
	
	I1218 13:35:04.590817   14360 main.go:141] libmachine: [stderr =====>] : 
	I1218 13:35:04.591195   14360 sshutil.go:53] new ssh client: &{IP:192.168.226.196 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\stopped-upgrade-592200\id_rsa Username:docker}
	I1218 13:35:04.647495   14360 ssh_runner.go:235] Completed: cat /version.json: (5.2965178s)
	W1218 13:35:04.647495   14360 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1218 13:35:04.661175   14360 ssh_runner.go:195] Run: systemctl --version
	I1218 13:35:05.452291   14360 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (6.1149034s)
	I1218 13:35:05.465794   14360 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 13:35:05.473963   14360 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 13:35:05.487159   14360 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1218 13:35:05.509229   14360 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1218 13:35:05.517318   14360 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
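The two find/sed passes above normalize any pre-existing bridge or podman CNI configs to minikube's pod network: IPv6 dst/subnet entries are dropped and the IPv4 subnet is rewritten to 10.244.0.0/16 (for podman configs, the gateway becomes 10.244.0.1). In this run neither pass matched a file. A simplified, single-file equivalent of the subnet rewrite (the filename is hypothetical):

	# Point an existing bridge CNI config at minikube's default pod CIDR.
	sudo sed -i -r \
	  -e 's|^( *)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' \
	  /etc/cni/net.d/1-bridge.conflist   # hypothetical example file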
	I1218 13:35:05.517467   14360 start.go:475] detecting cgroup driver to use...
	I1218 13:35:05.517726   14360 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 13:35:05.546923   14360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I1218 13:35:05.567510   14360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 13:35:05.577439   14360 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 13:35:05.590327   14360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 13:35:05.612606   14360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 13:35:05.637316   14360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 13:35:05.664193   14360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 13:35:05.687904   14360 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 13:35:05.712304   14360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 13:35:05.734026   14360 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 13:35:05.754755   14360 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 13:35:05.775676   14360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 13:35:05.890999   14360 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 13:35:05.912472   14360 start.go:475] detecting cgroup driver to use...
	I1218 13:35:05.927658   14360 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1218 13:35:05.963697   14360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 13:35:05.991128   14360 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1218 13:35:06.029242   14360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 13:35:06.059400   14360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 13:35:06.076056   14360 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 13:35:06.102869   14360 ssh_runner.go:195] Run: which cri-dockerd
	I1218 13:35:06.121662   14360 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1218 13:35:06.129919   14360 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1218 13:35:06.154661   14360 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1218 13:35:06.279312   14360 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1218 13:35:06.387209   14360 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1218 13:35:06.387538   14360 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1218 13:35:06.414507   14360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 13:35:06.541694   14360 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1218 13:35:07.631236   14360 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0889321s)
	I1218 13:35:07.644116   14360 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1218 13:35:07.664121   14360 out.go:177] 
	W1218 13:35:07.664753   14360 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Logs begin at Mon 2023-12-18 13:33:42 UTC, end at Mon 2023-12-18 13:35:07 UTC. --
	Dec 18 13:34:39 stopped-upgrade-592200 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.096473928Z" level=info msg="Starting up"
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.099006753Z" level=info msg="libcontainerd: started new containerd process" pid=2475
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.099139060Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.099200563Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.099263266Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.099345270Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.146378696Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.147113333Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.148072580Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.148308992Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.148416397Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.150543002Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.150664208Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.151702159Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.156999221Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.157302136Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.157398441Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.157484645Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.157500446Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.157508347Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.161357037Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.161410940Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.161499944Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.161735656Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.161751956Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.161765357Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.161848361Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.161865462Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.161876963Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.161892963Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.161988368Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.162173677Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.162723104Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.162886713Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.162929415Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.162948916Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.162960816Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.162977017Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.162995518Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.163007519Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.163023719Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.163034320Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.163068722Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.163121724Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.163136725Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.163147425Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.163157426Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.163306133Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.163486742Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.163501343Z" level=info msg="containerd successfully booted in 0.022796s"
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.182002558Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.182231069Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.182307473Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.182346275Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.184269970Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.184417377Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.184438778Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.184449379Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.202707982Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.348373086Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.348532193Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.348588696Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.348631798Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.348825308Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.348890711Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.349197826Z" level=info msg="Loading containers: start."
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.673519065Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.762378759Z" level=info msg="Loading containers: done."
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.786973376Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.788005227Z" level=info msg="Daemon has completed initialization"
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.827436477Z" level=info msg="API listen on [::]:2376"
	Dec 18 13:34:39 stopped-upgrade-592200 systemd[1]: Started Docker Application Container Engine.
	Dec 18 13:34:39 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:34:39.828252917Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 18 13:35:06 stopped-upgrade-592200 systemd[1]: Stopping Docker Application Container Engine...
	Dec 18 13:35:06 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:35:06.563963613Z" level=info msg="Processing signal 'terminated'"
	Dec 18 13:35:06 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:35:06.565180813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 18 13:35:06 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:35:06.565921013Z" level=info msg="Daemon shutdown complete"
	Dec 18 13:35:06 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:35:06.565954013Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 18 13:35:06 stopped-upgrade-592200 dockerd[2468]: time="2023-12-18T13:35:06.565971613Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 18 13:35:07 stopped-upgrade-592200 systemd[1]: docker.service: Succeeded.
	Dec 18 13:35:07 stopped-upgrade-592200 systemd[1]: Stopped Docker Application Container Engine.
	Dec 18 13:35:07 stopped-upgrade-592200 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 13:35:07 stopped-upgrade-592200 dockerd[2911]: time="2023-12-18T13:35:07.629955913Z" level=info msg="Starting up"
	Dec 18 13:35:07 stopped-upgrade-592200 dockerd[2911]: time="2023-12-18T13:35:07.633620213Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 18 13:35:07 stopped-upgrade-592200 dockerd[2911]: time="2023-12-18T13:35:07.633733613Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 18 13:35:07 stopped-upgrade-592200 dockerd[2911]: time="2023-12-18T13:35:07.633783313Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 18 13:35:07 stopped-upgrade-592200 dockerd[2911]: time="2023-12-18T13:35:07.633815313Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 18 13:35:07 stopped-upgrade-592200 dockerd[2911]: time="2023-12-18T13:35:07.634456913Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Dec 18 13:35:07 stopped-upgrade-592200 dockerd[2911]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused": unavailable
	Dec 18 13:35:07 stopped-upgrade-592200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 13:35:07 stopped-upgrade-592200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 18 13:35:07 stopped-upgrade-592200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1218 13:35:07.665340   14360 out.go:239] * 
	W1218 13:35:07.665995   14360 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 13:35:07.666805   14360 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-windows-amd64.exe start -p stopped-upgrade-592200 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (600.81s)
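The journal above shows the failure sequence: the first dockerd (pid 2468) launched its own child containerd on /var/run/docker/containerd/containerd.sock and came up cleanly; minikube then stopped the standalone containerd (13:35:05) and restarted docker, and the second dockerd (pid 2911) instead dials the system socket /run/containerd/containerd.sock, where nothing is listening, so the dial is refused and docker.service fails. One plausible on-host check and workaround under that reading (commands only; not taken from this run):

	# Nothing should be accepting where dockerd[2911] dials:
	sudo systemctl is-active containerd          # expected: inactive
	ls -l /run/containerd/containerd.sock        # likely a stale socket with no listener
	# Bringing containerd back first gives the dial a listener:
	sudo systemctl start containerd && sudo systemctl restart docker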

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (313.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-137000 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-137000 --driver=hyperv: exit status 1 (4m59.7796952s)

                                                
                                                
-- stdout --
	* [NoKubernetes-137000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17824
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node NoKubernetes-137000 in cluster NoKubernetes-137000
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	W1218 13:34:25.238097    9288 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-137000 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-137000 -n NoKubernetes-137000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-137000 -n NoKubernetes-137000: exit status 6 (13.3225066s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W1218 13:39:25.029336    7544 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1218 13:39:38.139858    7544 status.go:410] forwarded endpoint: failed to lookup ip for ""

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "NoKubernetes-137000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (313.10s)

TestNetworkPlugins/group/bridge/Start (276.2s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-353700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperv
E1218 14:13:08.612525   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-353700\client.crt: The system cannot find the path specified.
E1218 14:13:25.553834   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-353700\client.crt: The system cannot find the path specified.
E1218 14:13:42.394357   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 14:13:56.572811   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 14:14:02.431753   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 14:14:26.036240   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\custom-flannel-353700\client.crt: The system cannot find the path specified.
E1218 14:14:26.051902   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\custom-flannel-353700\client.crt: The system cannot find the path specified.
E1218 14:14:26.067585   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\custom-flannel-353700\client.crt: The system cannot find the path specified.
E1218 14:14:26.098881   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\custom-flannel-353700\client.crt: The system cannot find the path specified.
E1218 14:14:26.145539   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\custom-flannel-353700\client.crt: The system cannot find the path specified.
E1218 14:14:26.237908   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\custom-flannel-353700\client.crt: The system cannot find the path specified.
E1218 14:14:26.409398   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\custom-flannel-353700\client.crt: The system cannot find the path specified.
E1218 14:14:26.743235   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\custom-flannel-353700\client.crt: The system cannot find the path specified.
E1218 14:14:27.393324   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\custom-flannel-353700\client.crt: The system cannot find the path specified.
E1218 14:14:28.677681   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\custom-flannel-353700\client.crt: The system cannot find the path specified.
E1218 14:14:31.246104   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\custom-flannel-353700\client.crt: The system cannot find the path specified.
E1218 14:14:36.369157   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\custom-flannel-353700\client.crt: The system cannot find the path specified.
E1218 14:14:46.625276   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\custom-flannel-353700\client.crt: The system cannot find the path specified.
E1218 14:14:47.487635   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-353700\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p bridge-353700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperv: exit status 90 (4m36.0057619s)

-- stdout --
	* [bridge-353700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17824
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node bridge-353700 in cluster bridge-353700
	* Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	W1218 14:12:56.041295    4856 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1218 14:12:56.135491    4856 out.go:296] Setting OutFile to fd 1408 ...
	I1218 14:12:56.135491    4856 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 14:12:56.135491    4856 out.go:309] Setting ErrFile to fd 2020...
	I1218 14:12:56.135491    4856 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 14:12:56.160480    4856 out.go:303] Setting JSON to false
	I1218 14:12:56.166485    4856 start.go:128] hostinfo: {"hostname":"minikube7","uptime":9250,"bootTime":1702899525,"procs":207,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1218 14:12:56.166485    4856 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1218 14:12:56.168489    4856 out.go:177] * [bridge-353700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1218 14:12:56.169550    4856 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 14:12:56.169550    4856 notify.go:220] Checking for updates...
	I1218 14:12:56.170483    4856 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 14:12:56.170483    4856 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1218 14:12:56.171486    4856 out.go:177]   - MINIKUBE_LOCATION=17824
	I1218 14:12:56.172485    4856 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 14:12:56.175484    4856 config.go:182] Loaded profile config "enable-default-cni-353700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 14:12:56.176476    4856 config.go:182] Loaded profile config "false-353700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 14:12:56.176476    4856 config.go:182] Loaded profile config "flannel-353700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 14:12:56.176476    4856 config.go:182] Loaded profile config "multinode-015900-m01": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 14:12:56.177494    4856 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 14:13:03.007629    4856 out.go:177] * Using the hyperv driver based on user configuration
	I1218 14:13:03.008410    4856 start.go:298] selected driver: hyperv
	I1218 14:13:03.008578    4856 start.go:902] validating driver "hyperv" against <nil>
	I1218 14:13:03.008578    4856 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 14:13:03.078554    4856 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1218 14:13:03.079553    4856 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 14:13:03.080555    4856 cni.go:84] Creating CNI manager for "bridge"
	I1218 14:13:03.080555    4856 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1218 14:13:03.080555    4856 start_flags.go:323] config:
	{Name:bridge-353700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-353700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 14:13:03.080555    4856 iso.go:125] acquiring lock: {Name:mka7fdf4332632f41b99a369cc525e40499a0030 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 14:13:03.082546    4856 out.go:177] * Starting control plane node bridge-353700 in cluster bridge-353700
	I1218 14:13:03.083565    4856 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 14:13:03.083565    4856 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1218 14:13:03.083565    4856 cache.go:56] Caching tarball of preloaded images
	I1218 14:13:03.083565    4856 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1218 14:13:03.084554    4856 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1218 14:13:03.084554    4856 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-353700\config.json ...
	I1218 14:13:03.084554    4856 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-353700\config.json: {Name:mk3fadda15f697339c7301294a606d80c128a077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 14:13:03.086555    4856 start.go:365] acquiring machines lock for bridge-353700: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1218 14:13:32.189984    4856 start.go:369] acquired machines lock for "bridge-353700" in 29.1032983s
	I1218 14:13:32.190985    4856 start.go:93] Provisioning new machine with config: &{Name:bridge-353700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:bridge-353700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1218 14:13:32.190985    4856 start.go:125] createHost starting for "" (driver="hyperv")
	I1218 14:13:32.191984    4856 out.go:204] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1218 14:13:32.193004    4856 start.go:159] libmachine.API.Create for "bridge-353700" (driver="hyperv")
	I1218 14:13:32.193004    4856 client.go:168] LocalClient.Create starting
	I1218 14:13:32.193004    4856 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1218 14:13:32.194000    4856 main.go:141] libmachine: Decoding PEM data...
	I1218 14:13:32.194000    4856 main.go:141] libmachine: Parsing certificate...
	I1218 14:13:32.194000    4856 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1218 14:13:32.194000    4856 main.go:141] libmachine: Decoding PEM data...
	I1218 14:13:32.194000    4856 main.go:141] libmachine: Parsing certificate...
	I1218 14:13:32.194000    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1218 14:13:34.541554    4856 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1218 14:13:34.541629    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:13:34.541771    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1218 14:13:36.802369    4856 main.go:141] libmachine: [stdout =====>] : False
	
	I1218 14:13:36.802497    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:13:36.802725    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1218 14:13:38.757306    4856 main.go:141] libmachine: [stdout =====>] : True
	
	I1218 14:13:38.757640    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:13:38.757712    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1218 14:13:43.841088    4856 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1218 14:13:43.841181    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:13:43.844175    4856 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702490427-17765-amd64.iso...
	I1218 14:13:44.324799    4856 main.go:141] libmachine: Creating SSH key...
	I1218 14:13:44.480441    4856 main.go:141] libmachine: Creating VM...
	I1218 14:13:44.480441    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1218 14:13:48.347819    4856 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1218 14:13:48.348083    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:13:48.348190    4856 main.go:141] libmachine: Using switch "Default Switch"
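
The two Get-VMSwitch queries above implement switch selection: the driver lists external switches plus the built-in "Default Switch" (well-known Id c08cb7b8-9b3c-408e-8e30-5e16a3aeb444; in the VMSwitchType enum 1 is Internal and 2 is External), prefers an external switch when one exists, and falls back to "Default Switch" here. A rough Go sketch of issuing that query and decoding the JSON (the Where-Object filter is dropped for brevity; this is not the driver's actual code):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // vmSwitch mirrors the three fields the PowerShell query selects.
    type vmSwitch struct {
    	Id         string
    	Name       string
    	SwitchType int
    }

    func main() {
    	// @() forces ConvertTo-Json to emit an array even for a single switch.
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
    		`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`).Output()
    	if err != nil {
    		panic(err)
    	}
    	var switches []vmSwitch
    	if err := json.Unmarshal(out, &switches); err != nil {
    		panic(err)
    	}
    	for _, s := range switches {
    		fmt.Printf("%s  %s  type=%d\n", s.Id, s.Name, s.SwitchType)
    	}
    }
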
	I1218 14:13:48.348261    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1218 14:13:50.491668    4856 main.go:141] libmachine: [stdout =====>] : True
	
	I1218 14:13:50.491668    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:13:50.491668    4856 main.go:141] libmachine: Creating VHD
	I1218 14:13:50.491668    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\bridge-353700\fixed.vhd' -SizeBytes 10MB -Fixed
	I1218 14:13:55.323443    4856 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\bridge-353700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 7A98D34D-CC12-4C4B-9EA5-7CA55DF66CDD
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1218 14:13:55.323521    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:13:55.323521    4856 main.go:141] libmachine: Writing magic tar header
	I1218 14:13:55.323682    4856 main.go:141] libmachine: Writing SSH key tar header
	I1218 14:13:55.338862    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\bridge-353700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\bridge-353700\disk.vhd' -VHDType Dynamic -DeleteSource
	I1218 14:13:59.153947    4856 main.go:141] libmachine: [stdout =====>] : 
	I1218 14:13:59.154136    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:13:59.154231    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\bridge-353700\disk.vhd' -SizeBytes 20000MB
	I1218 14:14:02.177477    4856 main.go:141] libmachine: [stdout =====>] : 
	I1218 14:14:02.177704    4856 main.go:141] libmachine: [stderr =====>] : 
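
The "Writing magic tar header" / "Writing SSH key tar header" steps above follow the docker-machine/boot2docker convention: a small tar stream is written at the start of the freshly created fixed VHD so the guest can extract the SSH key on first boot, after which the disk is converted to dynamic and resized to its final 20000MB. A hedged Go sketch of that key-injection idea (file names are placeholders and the exact on-disk layout is an assumption):

    package main

    import (
    	"archive/tar"
    	"log"
    	"os"
    )

    // writeSSHKeyTar writes a tiny tar stream at offset 0 of the raw fixed VHD;
    // the guest init is expected to find and extract it on first boot.
    func writeSSHKeyTar(vhdPath, pubKeyPath string) error {
    	key, err := os.ReadFile(pubKeyPath)
    	if err != nil {
    		return err
    	}
    	f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	tw := tar.NewWriter(f)
    	hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(key))}
    	if err := tw.WriteHeader(hdr); err != nil {
    		return err
    	}
    	if _, err := tw.Write(key); err != nil {
    		return err
    	}
    	return tw.Close()
    }

    func main() {
    	if err := writeSSHKeyTar("fixed.vhd", "id_rsa.pub"); err != nil {
    		log.Fatal(err)
    	}
    }
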
	I1218 14:14:02.177799    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM bridge-353700 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\bridge-353700' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I1218 14:14:07.298735    4856 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	bridge-353700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1218 14:14:07.298807    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:14:07.298807    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName bridge-353700 -DynamicMemoryEnabled $false
	I1218 14:14:10.189994    4856 main.go:141] libmachine: [stdout =====>] : 
	I1218 14:14:10.190268    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:14:10.190268    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor bridge-353700 -Count 2
	I1218 14:14:12.787907    4856 main.go:141] libmachine: [stdout =====>] : 
	I1218 14:14:12.788182    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:14:12.788244    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName bridge-353700 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\bridge-353700\boot2docker.iso'
	I1218 14:14:15.912612    4856 main.go:141] libmachine: [stdout =====>] : 
	I1218 14:14:15.912612    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:14:15.912612    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName bridge-353700 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\bridge-353700\disk.vhd'
	I1218 14:14:19.286010    4856 main.go:141] libmachine: [stdout =====>] : 
	I1218 14:14:19.286182    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:14:19.286263    4856 main.go:141] libmachine: Starting VM...
	I1218 14:14:19.286263    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM bridge-353700
	I1218 14:14:22.887876    4856 main.go:141] libmachine: [stdout =====>] : 
	I1218 14:14:22.888010    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:14:22.888010    4856 main.go:141] libmachine: Waiting for host to start...
	I1218 14:14:22.888253    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-353700 ).state
	I1218 14:14:25.720442    4856 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:14:25.720548    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:14:25.720548    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:14:28.898463    4856 main.go:141] libmachine: [stdout =====>] : 
	I1218 14:14:28.898914    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:14:29.907100    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-353700 ).state
	I1218 14:14:32.632775    4856 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:14:32.632775    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:14:32.632775    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:14:35.846748    4856 main.go:141] libmachine: [stdout =====>] : 
	I1218 14:14:35.846748    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:14:36.862350    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-353700 ).state
	I1218 14:14:39.919542    4856 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:14:39.919542    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:14:39.919542    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:14:43.939455    4856 main.go:141] libmachine: [stdout =====>] : 
	I1218 14:14:43.939455    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:14:44.949766    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-353700 ).state
	I1218 14:14:48.356747    4856 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:14:48.356747    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:14:48.356747    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:14:52.202615    4856 main.go:141] libmachine: [stdout =====>] : 
	I1218 14:14:52.202685    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:14:53.209152    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-353700 ).state
	I1218 14:14:56.074655    4856 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:14:56.074745    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:14:56.074745    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:14:59.406600    4856 main.go:141] libmachine: [stdout =====>] : 192.168.224.53
	
	I1218 14:14:59.406600    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:14:59.406600    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-353700 ).state
	I1218 14:15:02.198255    4856 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:15:02.198255    4856 main.go:141] libmachine: [stderr =====>] : 
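
The "Waiting for host to start..." exchange above is a plain polling loop: the adapter's ipaddresses list stays empty until the guest's integration services report an address, so the driver alternates state and IP queries, sleeping between rounds, until an address (192.168.224.53 here) comes back. A compact Go approximation (VM name and interval are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // getIP runs the same PowerShell query the log repeats; it returns "" while
    // the guest has not yet reported an address.
    func getIP(vm string) string {
    	out, _ := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
    		fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm)).Output()
    	return strings.TrimSpace(string(out))
    }

    func main() {
    	for {
    		if ip := getIP("bridge-353700"); ip != "" {
    			fmt.Println("got IP:", ip)
    			return
    		}
    		time.Sleep(time.Second) // the driver also re-checks the VM state each pass
    	}
    }
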
	I1218 14:15:02.198255    4856 machine.go:88] provisioning docker machine ...
	I1218 14:15:02.198478    4856 buildroot.go:166] provisioning hostname "bridge-353700"
	I1218 14:15:02.198557    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-353700 ).state
	I1218 14:15:04.695533    4856 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:15:04.695533    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:15:04.695533    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:15:08.022746    4856 main.go:141] libmachine: [stdout =====>] : 192.168.224.53
	
	I1218 14:15:08.022847    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:15:08.032565    4856 main.go:141] libmachine: Using SSH client type: native
	I1218 14:15:08.034549    4856 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.224.53 22 <nil> <nil>}
	I1218 14:15:08.034549    4856 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-353700 && echo "bridge-353700" | sudo tee /etc/hostname
	I1218 14:15:08.246234    4856 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-353700
	
	I1218 14:15:08.246394    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-353700 ).state
	I1218 14:15:11.317056    4856 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:15:11.317056    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:15:11.317056    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:15:14.637561    4856 main.go:141] libmachine: [stdout =====>] : 192.168.224.53
	
	I1218 14:15:14.637812    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:15:14.649526    4856 main.go:141] libmachine: Using SSH client type: native
	I1218 14:15:14.650838    4856 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.224.53 22 <nil> <nil>}
	I1218 14:15:14.650838    4856 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-353700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-353700/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-353700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 14:15:14.831448    4856 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1218 14:15:14.831695    4856 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1218 14:15:14.831695    4856 buildroot.go:174] setting up certificates
	I1218 14:15:14.831805    4856 provision.go:83] configureAuth start
	I1218 14:15:14.831887    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-353700 ).state
	I1218 14:15:17.534354    4856 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:15:17.534445    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:15:17.534533    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:15:20.708825    4856 main.go:141] libmachine: [stdout =====>] : 192.168.224.53
	
	I1218 14:15:20.708915    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:15:20.708915    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-353700 ).state
	I1218 14:15:23.333011    4856 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:15:23.333011    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:15:23.333288    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:15:26.483804    4856 main.go:141] libmachine: [stdout =====>] : 192.168.224.53
	
	I1218 14:15:26.483866    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:15:26.483866    4856 provision.go:138] copyHostCerts
	I1218 14:15:26.484068    4856 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1218 14:15:26.484068    4856 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1218 14:15:26.484809    4856 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1218 14:15:26.485801    4856 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1218 14:15:26.485801    4856 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1218 14:15:26.486727    4856 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1218 14:15:26.488174    4856 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1218 14:15:26.488174    4856 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1218 14:15:26.488679    4856 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1218 14:15:26.489881    4856 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.bridge-353700 san=[192.168.224.53 192.168.224.53 localhost 127.0.0.1 minikube bridge-353700]
	I1218 14:15:26.597940    4856 provision.go:172] copyRemoteCerts
	I1218 14:15:26.614041    4856 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 14:15:26.614041    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-353700 ).state
	I1218 14:15:29.096220    4856 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:15:29.096266    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:15:29.096388    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:15:31.932100    4856 main.go:141] libmachine: [stdout =====>] : 192.168.224.53
	
	I1218 14:15:31.932179    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:15:31.933010    4856 sshutil.go:53] new ssh client: &{IP:192.168.224.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\bridge-353700\id_rsa Username:docker}
	I1218 14:15:32.043129    4856 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.4289951s)
	I1218 14:15:32.043616    4856 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1218 14:15:32.094274    4856 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I1218 14:15:32.141667    4856 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 14:15:32.191218    4856 provision.go:86] duration metric: configureAuth took 17.3592658s
	I1218 14:15:32.191282    4856 buildroot.go:189] setting minikube options for container-runtime
	I1218 14:15:32.191701    4856 config.go:182] Loaded profile config "bridge-353700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 14:15:32.191701    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-353700 ).state
	I1218 14:15:34.502098    4856 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:15:34.502383    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:15:34.502496    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:15:37.428409    4856 main.go:141] libmachine: [stdout =====>] : 192.168.224.53
	
	I1218 14:15:37.428409    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:15:37.433408    4856 main.go:141] libmachine: Using SSH client type: native
	I1218 14:15:37.434412    4856 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.224.53 22 <nil> <nil>}
	I1218 14:15:37.434412    4856 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1218 14:15:37.578975    4856 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1218 14:15:37.579033    4856 buildroot.go:70] root file system type: tmpfs
	I1218 14:15:37.579359    4856 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1218 14:15:37.579479    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-353700 ).state
	I1218 14:15:39.910622    4856 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:15:39.910966    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:15:39.911045    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:15:42.857712    4856 main.go:141] libmachine: [stdout =====>] : 192.168.224.53
	
	I1218 14:15:42.858025    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:15:42.867400    4856 main.go:141] libmachine: Using SSH client type: native
	I1218 14:15:42.868383    4856 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.224.53 22 <nil> <nil>}
	I1218 14:15:42.868383    4856 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1218 14:15:43.040905    4856 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1218 14:15:43.041923    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-353700 ).state
	I1218 14:15:45.837659    4856 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:15:45.837857    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:15:45.837857    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:15:48.987868    4856 main.go:141] libmachine: [stdout =====>] : 192.168.224.53
	
	I1218 14:15:48.987924    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:15:48.993966    4856 main.go:141] libmachine: Using SSH client type: native
	I1218 14:15:48.994662    4856 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.224.53 22 <nil> <nil>}
	I1218 14:15:48.994662    4856 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1218 14:15:52.979789    4856 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1218 14:15:52.980340    4856 machine.go:91] provisioned docker machine in 50.781301s
	I1218 14:15:52.980449    4856 client.go:171] LocalClient.Create took 2m20.7867977s
	I1218 14:15:52.980449    4856 start.go:167] duration metric: libmachine.API.Create for "bridge-353700" took 2m20.7867977s
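
The diff-or-install one-liner a few lines up is the idempotence trick for the unit file: the rendered docker.service.new only replaces the installed unit, and docker is only re-enabled and restarted, when the two files differ (here diff fails because no unit exists yet, so the install branch runs and the symlink is created). The same idiom in Go, as a sketch rather than the actual ssh_runner logic:

    package main

    import (
    	"bytes"
    	"log"
    	"os"
    	"os/exec"
    )

    // installIfChanged replaces current with rendered and bounces docker only
    // when the contents differ, avoiding a needless daemon restart.
    func installIfChanged(current, rendered string) error {
    	a, _ := os.ReadFile(current) // a missing unit (first boot) reads as empty
    	b, err := os.ReadFile(rendered)
    	if err != nil {
    		return err
    	}
    	if bytes.Equal(a, b) {
    		return nil
    	}
    	if err := os.Rename(rendered, current); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "enable", "docker"},
    		{"systemctl", "restart", "docker"},
    	} {
    		if err := exec.Command("sudo", args...).Run(); err != nil {
    			return err
    		}
    	}
    	return nil
    }

    func main() {
    	if err := installIfChanged("/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new"); err != nil {
    		log.Fatal(err)
    	}
    }
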
	I1218 14:15:52.980449    4856 start.go:300] post-start starting for "bridge-353700" (driver="hyperv")
	I1218 14:15:52.980449    4856 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 14:15:53.001188    4856 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 14:15:53.001188    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-353700 ).state
	I1218 14:15:55.629925    4856 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:15:55.629925    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:15:55.629925    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:15:58.618567    4856 main.go:141] libmachine: [stdout =====>] : 192.168.224.53
	
	I1218 14:15:58.618839    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:15:58.619094    4856 sshutil.go:53] new ssh client: &{IP:192.168.224.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\bridge-353700\id_rsa Username:docker}
	I1218 14:15:58.755419    4856 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.7542048s)
	I1218 14:15:58.777847    4856 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 14:15:58.788104    4856 info.go:137] Remote host: Buildroot 2021.02.12
	I1218 14:15:58.788316    4856 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1218 14:15:58.788952    4856 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1218 14:15:58.790766    4856 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem -> 149282.pem in /etc/ssl/certs
	I1218 14:15:58.821355    4856 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 14:15:58.844458    4856 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem --> /etc/ssl/certs/149282.pem (1708 bytes)
	I1218 14:15:58.896354    4856 start.go:303] post-start completed in 5.9158776s
	I1218 14:15:58.901317    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-353700 ).state
	I1218 14:16:01.720437    4856 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:16:01.720437    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:16:01.720437    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:16:04.951317    4856 main.go:141] libmachine: [stdout =====>] : 192.168.224.53
	
	I1218 14:16:04.951844    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:16:04.951844    4856 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-353700\config.json ...
	I1218 14:16:04.957001    4856 start.go:128] duration metric: createHost completed in 2m32.7653134s
	I1218 14:16:04.957154    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-353700 ).state
	I1218 14:16:07.536671    4856 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:16:07.536873    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:16:07.536873    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:16:10.457264    4856 main.go:141] libmachine: [stdout =====>] : 192.168.224.53
	
	I1218 14:16:10.457441    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:16:10.465260    4856 main.go:141] libmachine: Using SSH client type: native
	I1218 14:16:10.465937    4856 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.224.53 22 <nil> <nil>}
	I1218 14:16:10.465937    4856 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1218 14:16:10.606949    4856 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702908970.619134553
	
	I1218 14:16:10.606949    4856 fix.go:206] guest clock: 1702908970.619134553
	I1218 14:16:10.606949    4856 fix.go:219] Guest: 2023-12-18 14:16:10.619134553 +0000 UTC Remote: 2023-12-18 14:16:04.9570016 +0000 UTC m=+189.040801001 (delta=5.662132953s)
	I1218 14:16:10.606949    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-353700 ).state
	I1218 14:16:13.058580    4856 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:16:13.058732    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:16:13.058732    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:16:15.977744    4856 main.go:141] libmachine: [stdout =====>] : 192.168.224.53
	
	I1218 14:16:15.977744    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:16:15.985132    4856 main.go:141] libmachine: Using SSH client type: native
	I1218 14:16:15.985902    4856 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.224.53 22 <nil> <nil>}
	I1218 14:16:15.985902    4856 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702908970
	I1218 14:16:16.137893    4856 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Dec 18 14:16:10 UTC 2023
	
	I1218 14:16:16.138587    4856 fix.go:226] clock set: Mon Dec 18 14:16:10 UTC 2023
	 (err=<nil>)
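
The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and, on skew, push the host time into the guest with `sudo date -s`. The delta arithmetic, redone in Go with this run's values (the 2s threshold is an assumption for illustration; the log does not state one):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values taken from the log: guest clock vs. host-side remote timestamp.
    	guest := time.Unix(1702908970, 619134553)
    	remote := time.Date(2023, 12, 18, 14, 16, 4, 957001600, time.UTC)
    	delta := guest.Sub(remote)
    	fmt.Println(delta) // 5.662132953s, matching "(delta=5.662132953s)" above
    	if delta > 2*time.Second || delta < -2*time.Second {
    		fmt.Printf("sudo date -s @%d\n", guest.Unix())
    	}
    }
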
	I1218 14:16:16.138665    4856 start.go:83] releasing machines lock for "bridge-353700", held for 2m43.9479269s
	I1218 14:16:16.138761    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-353700 ).state
	I1218 14:16:18.800205    4856 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:16:18.800388    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:16:18.800388    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:16:22.074466    4856 main.go:141] libmachine: [stdout =====>] : 192.168.224.53
	
	I1218 14:16:22.074722    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:16:22.080397    4856 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 14:16:22.080965    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-353700 ).state
	I1218 14:16:22.104152    4856 ssh_runner.go:195] Run: cat /version.json
	I1218 14:16:22.105146    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-353700 ).state
	I1218 14:16:25.010108    4856 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:16:25.010306    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:16:25.010539    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:16:25.015560    4856 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:16:25.015854    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:16:25.015909    4856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:16:28.437861    4856 main.go:141] libmachine: [stdout =====>] : 192.168.224.53
	
	I1218 14:16:28.437940    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:16:28.438743    4856 sshutil.go:53] new ssh client: &{IP:192.168.224.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\bridge-353700\id_rsa Username:docker}
	I1218 14:16:28.499401    4856 main.go:141] libmachine: [stdout =====>] : 192.168.224.53
	
	I1218 14:16:28.499401    4856 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:16:28.499401    4856 sshutil.go:53] new ssh client: &{IP:192.168.224.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\bridge-353700\id_rsa Username:docker}
	I1218 14:16:28.555336    4856 ssh_runner.go:235] Completed: cat /version.json: (6.4501609s)
	I1218 14:16:28.578700    4856 ssh_runner.go:195] Run: systemctl --version
	I1218 14:16:28.660709    4856 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (6.5802819s)
	I1218 14:16:28.687522    4856 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 14:16:28.699934    4856 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 14:16:28.715609    4856 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 14:16:28.750949    4856 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
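
The find/mv pipeline above is how competing CNI configs are sidelined: any bridge or podman file in /etc/cni/net.d is renamed with a .mk_disabled suffix so the container runtime ignores it (here 87-podman-bridge.conflist). A sketch of the same rename in Go, assuming it runs as root inside the guest; this is illustrative, not minikube code:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	const dir = "/etc/cni/net.d"
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		panic(err)
    	}
    	for _, e := range entries {
    		name := e.Name()
    		// Match the logged find expression: *bridge* or *podman*, not already disabled.
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				panic(err)
    			}
    			fmt.Println("disabled", src)
    		}
    	}
    }
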
	I1218 14:16:28.751022    4856 start.go:475] detecting cgroup driver to use...
	I1218 14:16:28.751426    4856 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 14:16:28.805607    4856 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1218 14:16:28.838579    4856 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 14:16:28.858302    4856 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 14:16:28.872574    4856 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 14:16:28.910582    4856 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 14:16:28.947584    4856 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 14:16:28.992160    4856 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 14:16:29.035752    4856 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 14:16:29.079678    4856 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
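
The sed batch above rewrites /etc/containerd/config.toml in place: it pins sandbox_image to registry.k8s.io/pause:3.9, forces SystemdCgroup = false (the cgroupfs driver chosen at containerd.go:145), maps the legacy io.containerd.runtime.v1.linux and runc.v1 runtimes to io.containerd.runc.v2, and points conf_dir at /etc/cni/net.d. A Go equivalent of the SystemdCgroup edit, run against an illustrative fragment rather than the VM's real file:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Illustrative config.toml fragment; the real file lives in the guest.
    	conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
    	// Same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }
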
	I1218 14:16:29.125679    4856 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 14:16:29.167212    4856 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 14:16:29.202231    4856 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 14:16:29.419998    4856 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 14:16:29.460382    4856 start.go:475] detecting cgroup driver to use...
	I1218 14:16:29.477489    4856 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1218 14:16:29.531502    4856 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 14:16:29.573489    4856 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1218 14:16:29.628485    4856 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 14:16:29.681488    4856 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 14:16:29.724423    4856 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 14:16:29.801401    4856 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 14:16:29.826032    4856 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 14:16:29.885936    4856 ssh_runner.go:195] Run: which cri-dockerd
	I1218 14:16:29.907471    4856 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1218 14:16:29.928269    4856 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1218 14:16:29.974393    4856 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1218 14:16:30.176218    4856 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1218 14:16:30.352839    4856 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1218 14:16:30.353062    4856 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
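
docker.go then copies a 130-byte daemon.json into the guest to pin docker to the cgroupfs driver. The log does not show the file's contents, so the keys below are an assumption about what such a file typically carries; a sketch of writing it in Go:

    package main

    import (
    	"encoding/json"
    	"log"
    	"os"
    )

    func main() {
    	// Assumed contents: docker selects its cgroup driver via exec-opts.
    	// The real 130-byte payload is not shown in the log above.
    	cfg := map[string]any{
    		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
    	}
    	b, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("/etc/docker/daemon.json", b, 0o644); err != nil {
    		log.Fatal(err)
    	}
    }
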
	I1218 14:16:30.405250    4856 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 14:16:30.593271    4856 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1218 14:17:31.759150    4856 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1655972s)
	I1218 14:17:31.793267    4856 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1218 14:17:31.828578    4856 out.go:177] 
	W1218 14:17:31.829572    4856 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2023-12-18 14:14:48 UTC, ends at Mon 2023-12-18 14:17:31 UTC. --
	Dec 18 14:15:49 bridge-353700 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 14:15:49 bridge-353700 dockerd[676]: time="2023-12-18T14:15:49.640424915Z" level=info msg="Starting up"
	Dec 18 14:15:49 bridge-353700 dockerd[676]: time="2023-12-18T14:15:49.641484774Z" level=info msg="containerd not running, starting managed containerd"
	Dec 18 14:15:49 bridge-353700 dockerd[676]: time="2023-12-18T14:15:49.643105366Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=682
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.691825106Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.744951995Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.745369419Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.749853871Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.750048882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.750997235Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.751168745Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.751377757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.751632071Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.751795080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.751981691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.752642228Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.752835439Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.752926744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.753302465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.753366769Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.753487075Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.753657885Z" level=info msg="metadata content store policy set" policy=shared
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.803046364Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.803191772Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.803217873Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.803310378Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.803335080Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.803349681Z" level=info msg="NRI interface is disabled by configuration."
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.803378982Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.803711201Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.803835708Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.803876510Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.803910012Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.803945514Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.803997517Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.804068121Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.804101223Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.804134525Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.804168727Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.804207729Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.804309335Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.804497845Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.804956171Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.805015674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.805038176Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.805065477Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.805145182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.805184284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.805218786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.805233187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.805316291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.805334492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.805352493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.805368194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.805384595Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.805524403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.805570806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.805590307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.805606808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.805632109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.805656710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.805673211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.805687612Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.805705813Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.805723514Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.805736515Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.806095535Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.806296246Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.806521959Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 18 14:15:49 bridge-353700 dockerd[682]: time="2023-12-18T14:15:49.806730871Z" level=info msg="containerd successfully booted in 0.117085s"
	Dec 18 14:15:50 bridge-353700 dockerd[676]: time="2023-12-18T14:15:50.497180496Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 18 14:15:50 bridge-353700 dockerd[676]: time="2023-12-18T14:15:50.796205967Z" level=info msg="Loading containers: start."
	Dec 18 14:15:51 bridge-353700 dockerd[676]: time="2023-12-18T14:15:51.902596103Z" level=info msg="Loading containers: done."
	Dec 18 14:15:52 bridge-353700 dockerd[676]: time="2023-12-18T14:15:52.090676401Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 18 14:15:52 bridge-353700 dockerd[676]: time="2023-12-18T14:15:52.090806407Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 18 14:15:52 bridge-353700 dockerd[676]: time="2023-12-18T14:15:52.090821508Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 18 14:15:52 bridge-353700 dockerd[676]: time="2023-12-18T14:15:52.090832308Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 18 14:15:52 bridge-353700 dockerd[676]: time="2023-12-18T14:15:52.090874810Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 18 14:15:52 bridge-353700 dockerd[676]: time="2023-12-18T14:15:52.091061919Z" level=info msg="Daemon has completed initialization"
	Dec 18 14:15:52 bridge-353700 dockerd[676]: time="2023-12-18T14:15:52.987596478Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 18 14:15:52 bridge-353700 systemd[1]: Started Docker Application Container Engine.
	Dec 18 14:15:52 bridge-353700 dockerd[676]: time="2023-12-18T14:15:52.989555569Z" level=info msg="API listen on [::]:2376"
	Dec 18 14:16:30 bridge-353700 systemd[1]: Stopping Docker Application Container Engine...
	Dec 18 14:16:30 bridge-353700 dockerd[676]: time="2023-12-18T14:16:30.635089169Z" level=info msg="Processing signal 'terminated'"
	Dec 18 14:16:30 bridge-353700 dockerd[676]: time="2023-12-18T14:16:30.637560174Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 18 14:16:30 bridge-353700 dockerd[676]: time="2023-12-18T14:16:30.638447476Z" level=info msg="Daemon shutdown complete"
	Dec 18 14:16:30 bridge-353700 dockerd[676]: time="2023-12-18T14:16:30.638636176Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 18 14:16:30 bridge-353700 dockerd[676]: time="2023-12-18T14:16:30.638659776Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 18 14:16:31 bridge-353700 systemd[1]: docker.service: Succeeded.
	Dec 18 14:16:31 bridge-353700 systemd[1]: Stopped Docker Application Container Engine.
	Dec 18 14:16:31 bridge-353700 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 14:16:31 bridge-353700 dockerd[1013]: time="2023-12-18T14:16:31.758033887Z" level=info msg="Starting up"
	Dec 18 14:17:31 bridge-353700 dockerd[1013]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 18 14:17:31 bridge-353700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 14:17:31 bridge-353700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 18 14:17:31 bridge-353700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
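
The journal above pins down the failure: the first dockerd (pid 676) started and shut down cleanly, but its replacement (pid 1013) spent the full minute failing to dial /run/containerd/containerd.sock and exited 1, which is why "systemctl restart docker" took 1m1s and failed. A standalone Go probe for that symptom, illustrative only and meant to run inside the guest:

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Try to reach containerd's socket within a short deadline — the same
    	// dial dockerd failed above with "context deadline exceeded".
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()
    	var d net.Dialer
    	conn, err := d.DialContext(ctx, "unix", "/run/containerd/containerd.sock")
    	if err != nil {
    		fmt.Println("containerd socket not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("containerd socket is up")
    }
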
	W1218 14:17:31.829572    4856 out.go:239] * 
	W1218 14:17:31.830561    4856 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 14:17:31.831574    4856 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 90
--- FAIL: TestNetworkPlugins/group/bridge/Start (276.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (248.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-353700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperv
E1218 14:17:02.241059   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-353700\client.crt: The system cannot find the path specified.
E1218 14:17:10.017204   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\custom-flannel-353700\client.crt: The system cannot find the path specified.
E1218 14:17:31.334173   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-353700\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubenet-353700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperv: exit status 90 (4m8.1475554s)

                                                
                                                
-- stdout --
	* [kubenet-353700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17824
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node kubenet-353700 in cluster kubenet-353700
	* Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1218 14:15:59.499014   10172 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1218 14:15:59.590011   10172 out.go:296] Setting OutFile to fd 704 ...
	I1218 14:15:59.591012   10172 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 14:15:59.591012   10172 out.go:309] Setting ErrFile to fd 1596...
	I1218 14:15:59.591012   10172 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 14:15:59.617013   10172 out.go:303] Setting JSON to false
	I1218 14:15:59.622002   10172 start.go:128] hostinfo: {"hostname":"minikube7","uptime":9434,"bootTime":1702899525,"procs":207,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1218 14:15:59.622002   10172 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1218 14:15:59.623014   10172 out.go:177] * [kubenet-353700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1218 14:15:59.623999   10172 notify.go:220] Checking for updates...
	I1218 14:15:59.625032   10172 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 14:15:59.625032   10172 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 14:15:59.626044   10172 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1218 14:15:59.627011   10172 out.go:177]   - MINIKUBE_LOCATION=17824
	I1218 14:15:59.628008   10172 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 14:15:59.630020   10172 config.go:182] Loaded profile config "bridge-353700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 14:15:59.631015   10172 config.go:182] Loaded profile config "enable-default-cni-353700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 14:15:59.631015   10172 config.go:182] Loaded profile config "flannel-353700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 14:15:59.632002   10172 config.go:182] Loaded profile config "multinode-015900-m01": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 14:15:59.632002   10172 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 14:16:06.334084   10172 out.go:177] * Using the hyperv driver based on user configuration
	I1218 14:16:06.335087   10172 start.go:298] selected driver: hyperv
	I1218 14:16:06.335087   10172 start.go:902] validating driver "hyperv" against <nil>
	I1218 14:16:06.335216   10172 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 14:16:06.394874   10172 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1218 14:16:06.395877   10172 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 14:16:06.395877   10172 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1218 14:16:06.395877   10172 start_flags.go:323] config:
	{Name:kubenet-353700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-353700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 14:16:06.395877   10172 iso.go:125] acquiring lock: {Name:mka7fdf4332632f41b99a369cc525e40499a0030 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 14:16:06.398695   10172 out.go:177] * Starting control plane node kubenet-353700 in cluster kubenet-353700
	I1218 14:16:06.399362   10172 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 14:16:06.399612   10172 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1218 14:16:06.399612   10172 cache.go:56] Caching tarball of preloaded images
	I1218 14:16:06.400039   10172 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1218 14:16:06.400170   10172 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1218 14:16:06.400387   10172 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-353700\config.json ...
	I1218 14:16:06.400729   10172 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-353700\config.json: {Name:mkc232b5e06c2db6ab0f51d1be3aac1d2e9868bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 14:16:06.403116   10172 start.go:365] acquiring machines lock for kubenet-353700: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1218 14:16:16.138761   10172 start.go:369] acquired machines lock for "kubenet-353700" in 9.7356006s
	I1218 14:16:16.138761   10172 start.go:93] Provisioning new machine with config: &{Name:kubenet-353700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-353700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1218 14:16:16.139516   10172 start.go:125] createHost starting for "" (driver="hyperv")
	I1218 14:16:16.140492   10172 out.go:204] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1218 14:16:16.140492   10172 start.go:159] libmachine.API.Create for "kubenet-353700" (driver="hyperv")
	I1218 14:16:16.140492   10172 client.go:168] LocalClient.Create starting
	I1218 14:16:16.141502   10172 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1218 14:16:16.141502   10172 main.go:141] libmachine: Decoding PEM data...
	I1218 14:16:16.141502   10172 main.go:141] libmachine: Parsing certificate...
	I1218 14:16:16.141502   10172 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1218 14:16:16.141502   10172 main.go:141] libmachine: Decoding PEM data...
	I1218 14:16:16.141502   10172 main.go:141] libmachine: Parsing certificate...
	I1218 14:16:16.141502   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1218 14:16:18.501414   10172 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1218 14:16:18.501457   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:16:18.501768   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1218 14:16:20.741915   10172 main.go:141] libmachine: [stdout =====>] : False
	
	I1218 14:16:20.742003   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:16:20.742099   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1218 14:16:22.775135   10172 main.go:141] libmachine: [stdout =====>] : True
	
	I1218 14:16:22.775411   10172 main.go:141] libmachine: [stderr =====>] : 
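
	The three PowerShell probes above are the driver's preflight: is the Hyper-V module available ("Hyper-V"), is the caller in BUILTIN\Hyper-V Administrators (S-1-5-32-578, False here), and is the caller in BUILTIN\Administrators (True here, which is enough to proceed). Every "[executing ==>]" line in this log is a fresh powershell.exe -NoProfile -NonInteractive invocation with stdout and stderr captured separately. A minimal sketch of that pattern, with psRun as an assumed helper name:

	    package main

	    import (
	        "bytes"
	        "fmt"
	        "os/exec"
	    )

	    // psRun shells out to PowerShell the way the "[executing ==>]" lines do.
	    func psRun(args ...string) (string, string, error) {
	        ps := `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`
	        cmd := exec.Command(ps, append([]string{"-NoProfile", "-NonInteractive"}, args...)...)
	        var out, errb bytes.Buffer
	        cmd.Stdout, cmd.Stderr = &out, &errb
	        err := cmd.Run()
	        return out.String(), errb.String(), err
	    }

	    func main() {
	        // Same probe as the log: is the caller a member of BUILTIN\Administrators?
	        stdout, _, err := psRun(`@([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")`)
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("[stdout =====>] : %s", stdout)
	    }
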
	I1218 14:16:22.775604   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1218 14:16:27.885685   10172 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1218 14:16:27.885733   10172 main.go:141] libmachine: [stderr =====>] : 
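
	The Get-VMSwitch query is filtered to external switches plus the fixed GUID c08cb7b8-9b3c-408e-8e30-5e16a3aeb444, the well-known ID of Windows' built-in "Default Switch" (an internal NAT switch, hence SwitchType 1 in the JSON). Only the Default Switch exists on this host, so that is what the driver settles on below. A sketch of parsing that JSON and picking a switch; the real driver's preference order is not visible in this log, so the sketch just takes the first entry:

	    package main

	    import (
	        "encoding/json"
	        "fmt"
	    )

	    type vmSwitch struct {
	        Id         string
	        Name       string
	        SwitchType int // Hyper-V enum: 0 Private, 1 Internal, 2 External
	    }

	    func pickSwitch(raw []byte) (vmSwitch, error) {
	        var sw []vmSwitch
	        if err := json.Unmarshal(raw, &sw); err != nil {
	            return vmSwitch{}, err
	        }
	        if len(sw) == 0 {
	            return vmSwitch{}, fmt.Errorf("no usable Hyper-V switch found")
	        }
	        // This run only returned the Default Switch, so index 0 is it.
	        return sw[0], nil
	    }

	    func main() {
	        raw := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
	        sw, err := pickSwitch(raw)
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("Using switch %q\n", sw.Name)
	    }
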
	I1218 14:16:27.889502   10172 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702490427-17765-amd64.iso...
	I1218 14:16:28.408875   10172 main.go:141] libmachine: Creating SSH key...
	I1218 14:16:28.541406   10172 main.go:141] libmachine: Creating VM...
	I1218 14:16:28.541406   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1218 14:16:32.291759   10172 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1218 14:16:32.291840   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:16:32.291840   10172 main.go:141] libmachine: Using switch "Default Switch"
	I1218 14:16:32.291840   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1218 14:16:34.278129   10172 main.go:141] libmachine: [stdout =====>] : True
	
	I1218 14:16:34.278221   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:16:34.278298   10172 main.go:141] libmachine: Creating VHD
	I1218 14:16:34.278419   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-353700\fixed.vhd' -SizeBytes 10MB -Fixed
	I1218 14:16:38.370146   10172 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-353700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : C2304166-B615-456C-91C6-6436D5059070
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1218 14:16:38.370403   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:16:38.370403   10172 main.go:141] libmachine: Writing magic tar header
	I1218 14:16:38.370403   10172 main.go:141] libmachine: Writing SSH key tar header
	I1218 14:16:38.380670   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-353700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-353700\disk.vhd' -VHDType Dynamic -DeleteSource
	I1218 14:16:42.009667   10172 main.go:141] libmachine: [stdout =====>] : 
	I1218 14:16:42.009667   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:16:42.009667   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-353700\disk.vhd' -SizeBytes 20000MB
	I1218 14:16:44.842175   10172 main.go:141] libmachine: [stdout =====>] : 
	I1218 14:16:44.842350   10172 main.go:141] libmachine: [stderr =====>] : 
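
	"Creating VHD" through Resize-VHD above is the boot2docker disk dance: New-VHD -Fixed creates a tiny 10 MB raw image, the driver writes a tar archive (the "magic tar header" plus the freshly generated SSH key) directly into the raw bytes so the guest can unpack it on first boot, Convert-VHD -DeleteSource turns it into a dynamic disk.vhd, and Resize-VHD grows it to the requested 20000 MB. A hedged sketch of the tar-into-VHD step; the exact archive layout used by the real driver is an assumption:

	    package main

	    import (
	        "archive/tar"
	        "os"
	    )

	    // writeKeyIntoVHD lays a tar archive containing the public key at offset 0
	    // of the raw fixed VHD created by New-VHD above; the boot2docker guest
	    // unpacks it on first boot. The entry name is illustrative.
	    func writeKeyIntoVHD(vhdPath string, pubKey []byte) error {
	        f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0) // file must already exist
	        if err != nil {
	            return err
	        }
	        defer f.Close()
	        tw := tar.NewWriter(f)
	        if err := tw.WriteHeader(&tar.Header{
	            Name: ".ssh/authorized_keys",
	            Mode: 0600,
	            Size: int64(len(pubKey)),
	        }); err != nil {
	            return err
	        }
	        if _, err := tw.Write(pubKey); err != nil {
	            return err
	        }
	        return tw.Close()
	    }

	    func main() {
	        if err := writeKeyIntoVHD("fixed.vhd", []byte("ssh-rsa AAAA... demo\n")); err != nil {
	            panic(err)
	        }
	    }
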
	I1218 14:16:44.842562   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM kubenet-353700 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-353700' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I1218 14:16:48.974189   10172 main.go:141] libmachine: [stdout =====>] : 
	Name           State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----           ----- ----------- ----------------- ------   ------             -------
	kubenet-353700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1218 14:16:48.974403   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:16:48.974463   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName kubenet-353700 -DynamicMemoryEnabled $false
	I1218 14:16:51.527658   10172 main.go:141] libmachine: [stdout =====>] : 
	I1218 14:16:51.527968   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:16:51.527968   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor kubenet-353700 -Count 2
	I1218 14:16:54.114399   10172 main.go:141] libmachine: [stdout =====>] : 
	I1218 14:16:54.114399   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:16:54.114557   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName kubenet-353700 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-353700\boot2docker.iso'
	I1218 14:16:57.116028   10172 main.go:141] libmachine: [stdout =====>] : 
	I1218 14:16:57.116157   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:16:57.116347   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName kubenet-353700 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-353700\disk.vhd'
	I1218 14:17:00.362371   10172 main.go:141] libmachine: [stdout =====>] : 
	I1218 14:17:00.362714   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:17:00.362714   10172 main.go:141] libmachine: Starting VM...
	I1218 14:17:00.362801   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM kubenet-353700
	I1218 14:17:03.743490   10172 main.go:141] libmachine: [stdout =====>] : 
	I1218 14:17:03.743656   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:17:03.743656   10172 main.go:141] libmachine: Waiting for host to start...
	I1218 14:17:03.743846   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubenet-353700 ).state
	I1218 14:17:06.334108   10172 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:17:06.334297   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:17:06.334447   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubenet-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:17:09.447661   10172 main.go:141] libmachine: [stdout =====>] : 
	I1218 14:17:09.447661   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:17:10.460707   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubenet-353700 ).state
	I1218 14:17:13.050603   10172 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:17:13.050721   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:17:13.050721   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubenet-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:17:15.887962   10172 main.go:141] libmachine: [stdout =====>] : 
	I1218 14:17:15.887962   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:17:16.901676   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubenet-353700 ).state
	I1218 14:17:19.438533   10172 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:17:19.438597   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:17:19.438730   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubenet-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:17:22.629565   10172 main.go:141] libmachine: [stdout =====>] : 
	I1218 14:17:22.629841   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:17:23.634600   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubenet-353700 ).state
	I1218 14:17:26.222723   10172 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:17:26.222723   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:17:26.222815   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubenet-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:17:29.234501   10172 main.go:141] libmachine: [stdout =====>] : 
	I1218 14:17:29.234501   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:17:30.247180   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubenet-353700 ).state
	I1218 14:17:33.111609   10172 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:17:33.111609   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:17:33.111609   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubenet-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:17:36.666768   10172 main.go:141] libmachine: [stdout =====>] : 192.168.226.47
	
	I1218 14:17:36.667008   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:17:36.667123   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubenet-353700 ).state
	I1218 14:17:39.498001   10172 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:17:39.498001   10172 main.go:141] libmachine: [stderr =====>] : 
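
	From Start-VM at 14:17:00 to the first non-empty address at 14:17:36, the driver sits in a poll loop: read (Get-VM).state, then networkadapters[0].ipaddresses[0], sleep roughly a second, and repeat until DHCP on the Default Switch hands out 192.168.226.47. A self-contained sketch of that loop; the helper name and timeout are assumptions:

	    package main

	    import (
	        "bytes"
	        "fmt"
	        "os/exec"
	        "strings"
	        "time"
	    )

	    // psRun shells out to PowerShell exactly like the "[executing ==>]" lines.
	    func psRun(command string) (string, error) {
	        ps := `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`
	        var out bytes.Buffer
	        cmd := exec.Command(ps, "-NoProfile", "-NonInteractive", command)
	        cmd.Stdout = &out
	        err := cmd.Run()
	        return out.String(), err
	    }

	    // waitForIP polls VM state and the first adapter's first address until
	    // DHCP has assigned one, mirroring the loop in the log.
	    func waitForIP(name string, timeout time.Duration) (string, error) {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            state, err := psRun(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, name))
	            if err != nil {
	                return "", err
	            }
	            if strings.TrimSpace(state) != "Running" {
	                return "", fmt.Errorf("VM %s not running: %s", name, state)
	            }
	            ip, err := psRun(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, name))
	            if err != nil {
	                return "", err
	            }
	            if ip = strings.TrimSpace(ip); ip != "" {
	                return ip, nil
	            }
	            time.Sleep(time.Second) // matches the ~1s pause between polls above
	        }
	        return "", fmt.Errorf("timed out waiting for an IP on %s", name)
	    }

	    func main() {
	        ip, err := waitForIP("kubenet-353700", 2*time.Minute)
	        if err != nil {
	            panic(err)
	        }
	        fmt.Println(ip)
	    }
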
	I1218 14:17:39.498001   10172 machine.go:88] provisioning docker machine ...
	I1218 14:17:39.498001   10172 buildroot.go:166] provisioning hostname "kubenet-353700"
	I1218 14:17:39.498001   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubenet-353700 ).state
	I1218 14:17:42.181955   10172 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:17:42.182146   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:17:42.182146   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubenet-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:17:45.728749   10172 main.go:141] libmachine: [stdout =====>] : 192.168.226.47
	
	I1218 14:17:45.728749   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:17:45.734750   10172 main.go:141] libmachine: Using SSH client type: native
	I1218 14:17:45.746750   10172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.226.47 22 <nil> <nil>}
	I1218 14:17:45.746750   10172 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubenet-353700 && echo "kubenet-353700" | sudo tee /etc/hostname
	I1218 14:17:45.954641   10172 main.go:141] libmachine: SSH cmd err, output: <nil>: kubenet-353700
	
	I1218 14:17:45.954808   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubenet-353700 ).state
	I1218 14:17:48.799675   10172 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:17:48.799824   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:17:48.800125   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubenet-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:17:52.077338   10172 main.go:141] libmachine: [stdout =====>] : 192.168.226.47
	
	I1218 14:17:52.077382   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:17:52.084739   10172 main.go:141] libmachine: Using SSH client type: native
	I1218 14:17:52.085499   10172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.226.47 22 <nil> <nil>}
	I1218 14:17:52.085662   10172 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-353700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-353700/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-353700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 14:17:52.273756   10172 main.go:141] libmachine: SSH cmd err, output: <nil>: 
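
	Hostname provisioning is two SSH commands: write /etc/hostname, then idempotently pin 127.0.1.1 to the new name in /etc/hosts (replace an existing 127.0.1.1 line, otherwise append one). A sketch of assembling that script in Go before shipping it over SSH; the helper is illustrative, not minikube's actual code:

	    package main

	    import "fmt"

	    // hostnameScript renders the same two-step provisioning shown above:
	    // set the hostname, then make sure /etc/hosts maps 127.0.1.1 to it.
	    func hostnameScript(name string) string {
	        return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
	    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
	      else
	        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
	      fi
	    fi`, name)
	    }

	    func main() { fmt.Println(hostnameScript("kubenet-353700")) }
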
	I1218 14:17:52.273756   10172 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1218 14:17:52.273756   10172 buildroot.go:174] setting up certificates
	I1218 14:17:52.274332   10172 provision.go:83] configureAuth start
	I1218 14:17:52.274412   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubenet-353700 ).state
	I1218 14:17:54.893707   10172 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:17:54.893772   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:17:54.893835   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubenet-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:17:58.071265   10172 main.go:141] libmachine: [stdout =====>] : 192.168.226.47
	
	I1218 14:17:58.071455   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:17:58.071579   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubenet-353700 ).state
	I1218 14:18:00.877443   10172 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:18:00.877704   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:18:00.877848   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubenet-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:18:04.163091   10172 main.go:141] libmachine: [stdout =====>] : 192.168.226.47
	
	I1218 14:18:04.163357   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:18:04.163357   10172 provision.go:138] copyHostCerts
	I1218 14:18:04.164294   10172 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1218 14:18:04.164449   10172 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1218 14:18:04.164927   10172 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1218 14:18:04.166693   10172 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1218 14:18:04.166693   10172 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1218 14:18:04.167415   10172 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1218 14:18:04.168999   10172 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1218 14:18:04.169129   10172 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1218 14:18:04.169534   10172 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1218 14:18:04.170658   10172 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubenet-353700 san=[192.168.226.47 192.168.226.47 localhost 127.0.0.1 minikube kubenet-353700]
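
	configureAuth first refreshes the host-side ca/cert/key copies (the copyHostCerts lines above), then mints a per-machine server certificate whose SANs are exactly the list just logged: the VM IP twice, localhost, 127.0.0.1, minikube, and the hostname, valid for the CertExpiration of 26280h from the config dump. A self-signed stand-in sketch with Go's crypto/x509; the real flow signs with the minikube CA (ca.pem / ca-key.pem) rather than self-signing:

	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    func main() {
	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            panic(err)
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{Organization: []string{"jenkins.kubenet-353700"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            // SANs exactly as logged above.
	            IPAddresses: []net.IP{net.ParseIP("192.168.226.47"), net.ParseIP("127.0.0.1")},
	            DNSNames:    []string{"localhost", "minikube", "kubenet-353700"},
	        }
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	        if err != nil {
	            panic(err)
	        }
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }
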
	I1218 14:18:04.399173   10172 provision.go:172] copyRemoteCerts
	I1218 14:18:04.418046   10172 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 14:18:04.418046   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubenet-353700 ).state
	I1218 14:18:06.801021   10172 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:18:06.801214   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:18:06.801278   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubenet-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:18:09.788417   10172 main.go:141] libmachine: [stdout =====>] : 192.168.226.47
	
	I1218 14:18:09.788417   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:18:09.788417   10172 sshutil.go:53] new ssh client: &{IP:192.168.226.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-353700\id_rsa Username:docker}
	I1218 14:18:09.903222   10172 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.4851505s)
	I1218 14:18:09.903634   10172 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1218 14:18:09.947828   10172 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 14:18:09.988831   10172 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 14:18:10.033049   10172 provision.go:86] duration metric: configureAuth took 17.7586355s
	I1218 14:18:10.033182   10172 buildroot.go:189] setting minikube options for container-runtime
	I1218 14:18:10.034004   10172 config.go:182] Loaded profile config "kubenet-353700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 14:18:10.034004   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubenet-353700 ).state
	I1218 14:18:12.580187   10172 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:18:12.580280   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:18:12.580368   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubenet-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:18:15.631169   10172 main.go:141] libmachine: [stdout =====>] : 192.168.226.47
	
	I1218 14:18:15.631169   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:18:15.638717   10172 main.go:141] libmachine: Using SSH client type: native
	I1218 14:18:15.639643   10172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.226.47 22 <nil> <nil>}
	I1218 14:18:15.639694   10172 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1218 14:18:15.813966   10172 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1218 14:18:15.814036   10172 buildroot.go:70] root file system type: tmpfs
	I1218 14:18:15.814424   10172 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1218 14:18:15.814424   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubenet-353700 ).state
	I1218 14:18:18.503304   10172 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:18:18.503402   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:18:18.503442   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubenet-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:18:21.631835   10172 main.go:141] libmachine: [stdout =====>] : 192.168.226.47
	
	I1218 14:18:21.631835   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:18:21.642612   10172 main.go:141] libmachine: Using SSH client type: native
	I1218 14:18:21.643610   10172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.226.47 22 <nil> <nil>}
	I1218 14:18:21.643610   10172 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1218 14:18:21.856094   10172 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1218 14:18:21.856245   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubenet-353700 ).state
	I1218 14:18:24.586545   10172 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:18:24.586545   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:18:24.586719   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubenet-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:18:27.671784   10172 main.go:141] libmachine: [stdout =====>] : 192.168.226.47
	
	I1218 14:18:27.671905   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:18:27.678406   10172 main.go:141] libmachine: Using SSH client type: native
	I1218 14:18:27.679306   10172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.226.47 22 <nil> <nil>}
	I1218 14:18:27.679363   10172 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1218 14:18:29.376328   10172 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1218 14:18:29.376328   10172 machine.go:91] provisioned docker machine in 49.8780979s
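
	The docker.service update just above is deliberately idempotent: the unit is written to docker.service.new via tee, diffed against the installed file, and only on a difference moved into place followed by daemon-reload, enable, and restart. On this fresh VM there is no old unit yet, so diff fails with "No such file or directory" and the new unit is installed and started, which is where the "Created symlink" line comes from. A sketch of composing that one-liner:

	    package main

	    import "fmt"

	    // swapUnitCmd renders the update-only-if-changed idiom shown above,
	    // with paths as they appear in the log.
	    func swapUnitCmd(unit string) string {
	        path := "/lib/systemd/system/" + unit
	        return fmt.Sprintf(
	            "sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
	                "sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && "+
	                "sudo systemctl -f restart %[2]s; }",
	            path, unit)
	    }

	    func main() { fmt.Println(swapUnitCmd("docker.service")) }
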
	I1218 14:18:29.376328   10172 client.go:171] LocalClient.Create took 2m13.2352236s
	I1218 14:18:29.376328   10172 start.go:167] duration metric: libmachine.API.Create for "kubenet-353700" took 2m13.2352236s
	I1218 14:18:29.376328   10172 start.go:300] post-start starting for "kubenet-353700" (driver="hyperv")
	I1218 14:18:29.376328   10172 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 14:18:29.393075   10172 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 14:18:29.393075   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubenet-353700 ).state
	I1218 14:18:31.882102   10172 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:18:31.882173   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:18:31.882173   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubenet-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:18:35.086240   10172 main.go:141] libmachine: [stdout =====>] : 192.168.226.47
	
	I1218 14:18:35.086436   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:18:35.086889   10172 sshutil.go:53] new ssh client: &{IP:192.168.226.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-353700\id_rsa Username:docker}
	I1218 14:18:35.230081   10172 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.8369794s)
	I1218 14:18:35.245363   10172 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 14:18:35.253522   10172 info.go:137] Remote host: Buildroot 2021.02.12
	I1218 14:18:35.253522   10172 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1218 14:18:35.254095   10172 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1218 14:18:35.255499   10172 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem -> 149282.pem in /etc/ssl/certs
	I1218 14:18:35.271691   10172 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 14:18:35.293722   10172 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\149282.pem --> /etc/ssl/certs/149282.pem (1708 bytes)
	I1218 14:18:35.347890   10172 start.go:303] post-start completed in 5.971534s
	I1218 14:18:35.352516   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubenet-353700 ).state
	I1218 14:18:38.135118   10172 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:18:38.135118   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:18:38.135333   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubenet-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:18:41.107724   10172 main.go:141] libmachine: [stdout =====>] : 192.168.226.47
	
	I1218 14:18:41.107833   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:18:41.108195   10172 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-353700\config.json ...
	I1218 14:18:41.111725   10172 start.go:128] duration metric: createHost completed in 2m24.9715422s
	I1218 14:18:41.111725   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubenet-353700 ).state
	I1218 14:18:43.494747   10172 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:18:43.494747   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:18:43.494747   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubenet-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:18:46.422922   10172 main.go:141] libmachine: [stdout =====>] : 192.168.226.47
	
	I1218 14:18:46.423165   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:18:46.430429   10172 main.go:141] libmachine: Using SSH client type: native
	I1218 14:18:46.431130   10172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.226.47 22 <nil> <nil>}
	I1218 14:18:46.431130   10172 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1218 14:18:46.588590   10172 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702909126.598390577
	
	I1218 14:18:46.588590   10172 fix.go:206] guest clock: 1702909126.598390577
	I1218 14:18:46.588590   10172 fix.go:219] Guest: 2023-12-18 14:18:46.598390577 +0000 UTC Remote: 2023-12-18 14:18:41.1117259 +0000 UTC m=+161.757023101 (delta=5.486664677s)
	I1218 14:18:46.588701   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubenet-353700 ).state
	I1218 14:18:49.102575   10172 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:18:49.102575   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:18:49.103070   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubenet-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:18:51.891075   10172 main.go:141] libmachine: [stdout =====>] : 192.168.226.47
	
	I1218 14:18:51.891292   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:18:51.898964   10172 main.go:141] libmachine: Using SSH client type: native
	I1218 14:18:51.899870   10172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb4f40] 0xdb7a80 <nil>  [] 0s} 192.168.226.47 22 <nil> <nil>}
	I1218 14:18:51.899870   10172 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702909126
	I1218 14:18:52.067344   10172 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Dec 18 14:18:46 UTC 2023
	
	I1218 14:18:52.067454   10172 fix.go:226] clock set: Mon Dec 18 14:18:46 UTC 2023 (err=<nil>)
	I1218 14:18:52.067454   10172 start.go:83] releasing machines lock for "kubenet-353700", held for 2m35.9279749s
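
	The clock fix-up reads the guest clock with date +%s.%N, compares it to the host-side timestamp captured when createHost finished, and because the 5.486664677s delta is too large, resets the guest with sudo date -s @1702909126 (whole seconds, hence the guest landing back on 14:18:46). A sketch of the drift check; the tolerance value is an assumption, since the log only shows that a ~5.5s delta triggered a reset:

	    package main

	    import (
	        "fmt"
	        "time"
	    )

	    // needsClockFix reports whether guest and host clocks diverge by more
	    // than the given tolerance in either direction.
	    func needsClockFix(guest, host time.Time, tolerance time.Duration) bool {
	        d := guest.Sub(host)
	        if d < 0 {
	            d = -d
	        }
	        return d > tolerance
	    }

	    func main() {
	        guest := time.Unix(1702909126, 598390577) // from `date +%s.%N`
	        host := time.Date(2023, 12, 18, 14, 18, 41, 111725900, time.UTC)
	        fmt.Println("delta:", guest.Sub(host), "fix:", needsClockFix(guest, host, 2*time.Second))
	    }
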
	I1218 14:18:52.067749   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubenet-353700 ).state
	I1218 14:18:54.783480   10172 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:18:54.783553   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:18:54.783633   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubenet-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:18:57.881189   10172 main.go:141] libmachine: [stdout =====>] : 192.168.226.47
	
	I1218 14:18:57.881189   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:18:57.887842   10172 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 14:18:57.887921   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubenet-353700 ).state
	I1218 14:18:57.901251   10172 ssh_runner.go:195] Run: cat /version.json
	I1218 14:18:57.901251   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubenet-353700 ).state
	I1218 14:19:00.699264   10172 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:19:00.699264   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:19:00.699551   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubenet-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:19:00.794369   10172 main.go:141] libmachine: [stdout =====>] : Running
	
	I1218 14:19:00.794466   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:19:00.794560   10172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubenet-353700 ).networkadapters[0]).ipaddresses[0]
	I1218 14:19:03.793910   10172 main.go:141] libmachine: [stdout =====>] : 192.168.226.47
	
	I1218 14:19:03.793910   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:19:03.793910   10172 sshutil.go:53] new ssh client: &{IP:192.168.226.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-353700\id_rsa Username:docker}
	I1218 14:19:03.983245   10172 main.go:141] libmachine: [stdout =====>] : 192.168.226.47
	
	I1218 14:19:03.983277   10172 main.go:141] libmachine: [stderr =====>] : 
	I1218 14:19:03.983549   10172 sshutil.go:53] new ssh client: &{IP:192.168.226.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-353700\id_rsa Username:docker}
	I1218 14:19:04.006953   10172 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (6.119083s)
	I1218 14:19:04.099192   10172 ssh_runner.go:235] Completed: cat /version.json: (6.1978292s)
	I1218 14:19:04.117779   10172 ssh_runner.go:195] Run: systemctl --version
	I1218 14:19:04.164428   10172 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 14:19:04.175411   10172 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 14:19:04.192417   10172 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 14:19:04.226971   10172 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1218 14:19:04.227048   10172 start.go:475] detecting cgroup driver to use...
	I1218 14:19:04.227198   10172 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 14:19:04.277442   10172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1218 14:19:04.312441   10172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 14:19:04.330772   10172 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 14:19:04.355917   10172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 14:19:04.400942   10172 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 14:19:04.456189   10172 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 14:19:04.497719   10172 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 14:19:04.538669   10172 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 14:19:04.576001   10172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
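
	The sed pipeline above rewrites /etc/containerd/config.toml in place: pin sandbox_image to registry.k8s.io/pause:3.9, set restrict_oom_score_adj = false, force SystemdCgroup = false (the "cgroupfs" driver named in the log), migrate the legacy io.containerd.runtime.v1.linux and runc.v1 runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. The same kind of rewrite expressed with Go regexps over an in-memory string; the sample TOML is illustrative, and \s* is used instead of sed's ' *' to stay indentation-agnostic:

	    package main

	    import (
	        "fmt"
	        "regexp"
	    )

	    func main() {
	        conf := `[plugins."io.containerd.grpc.v1.cri"]
	      sandbox_image = "registry.k8s.io/pause:3.8"
	      SystemdCgroup = true`
	        conf = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
	            ReplaceAllString(conf, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`)
	        conf = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
	            ReplaceAllString(conf, `${1}SystemdCgroup = false`)
	        fmt.Println(conf)
	    }
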
	I1218 14:19:04.615732   10172 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 14:19:04.661296   10172 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 14:19:04.707287   10172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 14:19:04.973933   10172 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 14:19:05.015925   10172 start.go:475] detecting cgroup driver to use...
	I1218 14:19:05.035942   10172 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1218 14:19:05.087931   10172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 14:19:05.127956   10172 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1218 14:19:05.189059   10172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 14:19:05.231766   10172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 14:19:05.271784   10172 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 14:19:05.350622   10172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 14:19:05.383259   10172 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 14:19:05.437923   10172 ssh_runner.go:195] Run: which cri-dockerd
	I1218 14:19:05.460179   10172 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1218 14:19:05.484181   10172 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (193 bytes)
	I1218 14:19:05.532880   10172 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1218 14:19:05.742155   10172 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1218 14:19:05.937159   10172 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1218 14:19:05.937159   10172 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1218 14:19:05.983372   10172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 14:19:06.173025   10172 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1218 14:20:07.326492   10172 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1531863s)
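
	This is the failure point of the whole run: sudo systemctl restart docker blocks for 1m1.15s and exits nonzero, so minikube aborts with RUNTIME_ENABLE and, as the next line shows, pulls journalctl --no-pager -u docker so the error message carries the daemon's own logs. A sketch of that diagnose-on-failure pattern; the helper is assumed, and in reality both commands run inside the guest over SSH:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // restartWithJournal restarts the unit and, on failure, attaches the
	    // unit's journal to the returned error, as minikube does above.
	    func restartWithJournal(unit string) error {
	        if out, err := exec.Command("sudo", "systemctl", "restart", unit).CombinedOutput(); err != nil {
	            journal, _ := exec.Command("sudo", "journalctl", "--no-pager", "-u", unit).Output()
	            return fmt.Errorf("restart %s: %v\n%s\n%s", unit, err, out, journal)
	        }
	        return nil
	    }

	    func main() {
	        if err := restartWithJournal("docker"); err != nil {
	            fmt.Println(err)
	        }
	    }
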
	I1218 14:20:07.350855   10172 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1218 14:20:07.399779   10172 out.go:177] 
	W1218 14:20:07.400780   10172 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2023-12-18 14:17:24 UTC, ends at Mon 2023-12-18 14:20:07 UTC. --
	Dec 18 14:18:28 kubenet-353700 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 14:18:28 kubenet-353700 dockerd[695]: time="2023-12-18T14:18:28.310188400Z" level=info msg="Starting up"
	Dec 18 14:18:28 kubenet-353700 dockerd[695]: time="2023-12-18T14:18:28.311447246Z" level=info msg="containerd not running, starting managed containerd"
	Dec 18 14:18:28 kubenet-353700 dockerd[695]: time="2023-12-18T14:18:28.312701292Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=701
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.351426916Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.377699981Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.377812285Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.380515085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.380634789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.381011103Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.381118107Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.381489621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.381773531Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.381886335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.382005640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.382716866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.382883272Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.382903573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.383084079Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.383924910Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.385866781Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.385973585Z" level=info msg="metadata content store policy set" policy=shared
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.399915498Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.399955599Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.399983200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.400052803Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.400205909Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.400337813Z" level=info msg="NRI interface is disabled by configuration."
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.400365014Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.400516720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.400566622Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.400587623Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.400605523Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.400629724Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.400655625Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.400672426Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.400687526Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.400704027Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.400720427Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.400736828Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.400751429Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.400846932Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.401450354Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.401565058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.401590759Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.401634261Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.401692763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.401737665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.401760266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.401779266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.401795767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.401811668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.401841469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.401858869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.401874770Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.401960473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.401978674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.402014675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.402045876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.402060077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.402074277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.402086478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.402099078Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.402113979Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.402131379Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.402146580Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.402685600Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.402787403Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.409469849Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 18 14:18:28 kubenet-353700 dockerd[701]: time="2023-12-18T14:18:28.410342681Z" level=info msg="containerd successfully booted in 0.062231s"
	Dec 18 14:18:28 kubenet-353700 dockerd[695]: time="2023-12-18T14:18:28.456169965Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 18 14:18:28 kubenet-353700 dockerd[695]: time="2023-12-18T14:18:28.488390850Z" level=info msg="Loading containers: start."
	Dec 18 14:18:28 kubenet-353700 dockerd[695]: time="2023-12-18T14:18:28.914008093Z" level=info msg="Loading containers: done."
	Dec 18 14:18:28 kubenet-353700 dockerd[695]: time="2023-12-18T14:18:28.948460659Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 18 14:18:28 kubenet-353700 dockerd[695]: time="2023-12-18T14:18:28.948524162Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 18 14:18:28 kubenet-353700 dockerd[695]: time="2023-12-18T14:18:28.948534662Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 18 14:18:28 kubenet-353700 dockerd[695]: time="2023-12-18T14:18:28.948541462Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 18 14:18:28 kubenet-353700 dockerd[695]: time="2023-12-18T14:18:28.948567363Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 18 14:18:28 kubenet-353700 dockerd[695]: time="2023-12-18T14:18:28.948694968Z" level=info msg="Daemon has completed initialization"
	Dec 18 14:18:29 kubenet-353700 dockerd[695]: time="2023-12-18T14:18:29.383945664Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 18 14:18:29 kubenet-353700 systemd[1]: Started Docker Application Container Engine.
	Dec 18 14:18:29 kubenet-353700 dockerd[695]: time="2023-12-18T14:18:29.384637488Z" level=info msg="API listen on [::]:2376"
	Dec 18 14:19:06 kubenet-353700 dockerd[695]: time="2023-12-18T14:19:06.215393565Z" level=info msg="Processing signal 'terminated'"
	Dec 18 14:19:06 kubenet-353700 dockerd[695]: time="2023-12-18T14:19:06.216397768Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 18 14:19:06 kubenet-353700 dockerd[695]: time="2023-12-18T14:19:06.217026569Z" level=info msg="Daemon shutdown complete"
	Dec 18 14:19:06 kubenet-353700 dockerd[695]: time="2023-12-18T14:19:06.217170369Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 18 14:19:06 kubenet-353700 dockerd[695]: time="2023-12-18T14:19:06.217202769Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 18 14:19:06 kubenet-353700 systemd[1]: Stopping Docker Application Container Engine...
	Dec 18 14:19:07 kubenet-353700 systemd[1]: docker.service: Succeeded.
	Dec 18 14:19:07 kubenet-353700 systemd[1]: Stopped Docker Application Container Engine.
	Dec 18 14:19:07 kubenet-353700 systemd[1]: Starting Docker Application Container Engine...
	Dec 18 14:19:07 kubenet-353700 dockerd[1032]: time="2023-12-18T14:19:07.319925823Z" level=info msg="Starting up"
	Dec 18 14:20:07 kubenet-353700 dockerd[1032]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 18 14:20:07 kubenet-353700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 14:20:07 kubenet-353700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 18 14:20:07 kubenet-353700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	W1218 14:20:07.401186   10172 out.go:239] * 
	W1218 14:20:07.402784   10172 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 14:20:07.403784   10172 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 90
--- FAIL: TestNetworkPlugins/group/kubenet/Start (248.32s)
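
The fatal line in the journal above is dockerd timing out while dialing its managed containerd socket: "Starting up" is logged at 14:19:07 and the dial gives up exactly sixty seconds later with "context deadline exceeded". A minimal sketch of that failure mode, assuming only the socket path from the log (this is not dockerd's code — dockerd uses a blocking gRPC dial, approximated here with a plain retry loop):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// dialUntilDeadline retries a unix-socket dial until it succeeds or the
// context deadline expires, loosely mirroring how dockerd blocks waiting
// for containerd before failing with "context deadline exceeded".
func dialUntilDeadline(ctx context.Context, socket string) error {
	d := net.Dialer{}
	for {
		conn, err := d.DialContext(ctx, "unix", socket)
		if err == nil {
			conn.Close()
			return nil
		}
		select {
		case <-ctx.Done():
			// Same shape as the journal line: failed to dial "/run/containerd/containerd.sock"
			return fmt.Errorf("failed to dial %q: %w", socket, ctx.Err())
		case <-time.After(500 * time.Millisecond):
			// containerd may still be starting; keep retrying until the deadline.
		}
	}
}

func main() {
	// The 60s window is inferred from the 14:19:07 -> 14:20:07 gap in the journal.
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	if err := dialUntilDeadline(ctx, "/run/containerd/containerd.sock"); err != nil {
		fmt.Println(err)
	}
}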

                                                
                                    

Test pass (191/252)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 17.14
4 TestDownloadOnly/v1.16.0/preload-exists 0.07
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.29
10 TestDownloadOnly/v1.28.4/json-events 14.04
11 TestDownloadOnly/v1.28.4/preload-exists 0
14 TestDownloadOnly/v1.28.4/kubectl 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.27
17 TestDownloadOnly/v1.29.0-rc.2/json-events 13.5
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.26
23 TestDownloadOnly/DeleteAll 1.31
24 TestDownloadOnly/DeleteAlwaysSucceeds 1.27
26 TestBinaryMirror 7.17
27 TestOffline 442.09
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.3
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.28
32 TestAddons/Setup 383.12
35 TestAddons/parallel/Ingress 63.91
36 TestAddons/parallel/InspektorGadget 26.74
37 TestAddons/parallel/MetricsServer 20.96
38 TestAddons/parallel/HelmTiller 27.67
40 TestAddons/parallel/CSI 87.39
41 TestAddons/parallel/Headlamp 41.06
42 TestAddons/parallel/CloudSpanner 22.73
43 TestAddons/parallel/LocalPath 44.34
44 TestAddons/parallel/NvidiaDevicePlugin 23.01
47 TestAddons/serial/GCPAuth/Namespaces 0.36
48 TestAddons/StoppedEnableDisable 48.31
49 TestCertOptions 493.77
50 TestCertExpiration 938.81
51 TestDockerFlags 457.5
60 TestErrorSpam/start 17.42
61 TestErrorSpam/status 36.82
62 TestErrorSpam/pause 22.67
63 TestErrorSpam/unpause 22.71
64 TestErrorSpam/stop 46.87
67 TestFunctional/serial/CopySyncFile 0.04
68 TestFunctional/serial/StartWithProxy 198.06
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 113.36
71 TestFunctional/serial/KubeContext 0.15
72 TestFunctional/serial/KubectlGetPods 0.23
75 TestFunctional/serial/CacheCmd/cache/add_remote 26.66
76 TestFunctional/serial/CacheCmd/cache/add_local 10.52
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.3
78 TestFunctional/serial/CacheCmd/cache/list 0.28
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.31
80 TestFunctional/serial/CacheCmd/cache/cache_reload 36.22
81 TestFunctional/serial/CacheCmd/cache/delete 0.58
82 TestFunctional/serial/MinikubeKubectlCmd 0.51
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.39
84 TestFunctional/serial/ExtraConfig 116.32
85 TestFunctional/serial/ComponentHealth 0.2
86 TestFunctional/serial/LogsCmd 8.39
87 TestFunctional/serial/LogsFileCmd 10.45
88 TestFunctional/serial/InvalidService 21.18
94 TestFunctional/parallel/StatusCmd 42.49
98 TestFunctional/parallel/ServiceCmdConnect 26.24
99 TestFunctional/parallel/AddonsCmd 0.81
100 TestFunctional/parallel/PersistentVolumeClaim 41.57
102 TestFunctional/parallel/SSHCmd 19.67
103 TestFunctional/parallel/CpCmd 59.57
104 TestFunctional/parallel/MySQL 58.6
105 TestFunctional/parallel/FileSync 10.24
106 TestFunctional/parallel/CertSync 63.44
110 TestFunctional/parallel/NodeLabels 0.2
112 TestFunctional/parallel/NonActiveRuntimeDisabled 10.83
114 TestFunctional/parallel/License 3.3
115 TestFunctional/parallel/ProfileCmd/profile_not_create 9.5
116 TestFunctional/parallel/ServiceCmd/DeployApp 19.46
117 TestFunctional/parallel/ProfileCmd/profile_list 8.49
118 TestFunctional/parallel/ProfileCmd/profile_json_output 9.24
119 TestFunctional/parallel/ServiceCmd/List 14.59
120 TestFunctional/parallel/ServiceCmd/JSONOutput 14.55
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 8.85
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.59
129 TestFunctional/parallel/Version/short 0.29
130 TestFunctional/parallel/Version/components 8.39
131 TestFunctional/parallel/ImageCommands/ImageListShort 8.16
132 TestFunctional/parallel/ImageCommands/ImageListTable 7.9
133 TestFunctional/parallel/ImageCommands/ImageListJson 7.89
134 TestFunctional/parallel/ImageCommands/ImageListYaml 8.07
135 TestFunctional/parallel/ImageCommands/ImageBuild 26.3
136 TestFunctional/parallel/ImageCommands/Setup 3.83
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 22.57
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 20.43
145 TestFunctional/parallel/DockerEnv/powershell 45.13
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 26.3
147 TestFunctional/parallel/UpdateContextCmd/no_changes 2.58
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.56
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.58
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 8.8
151 TestFunctional/parallel/ImageCommands/ImageRemove 15.55
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 17.84
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 9.44
154 TestFunctional/delete_addon-resizer_images 0.49
155 TestFunctional/delete_my-image_image 0.19
156 TestFunctional/delete_minikube_cached_images 0.2
160 TestImageBuild/serial/Setup 187.57
161 TestImageBuild/serial/NormalBuild 9.04
162 TestImageBuild/serial/BuildWithBuildArg 8.69
163 TestImageBuild/serial/BuildWithDockerIgnore 7.53
164 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.46
167 TestIngressAddonLegacy/StartLegacyK8sCluster 209.87
169 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 38.31
170 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 14.48
171 TestIngressAddonLegacy/serial/ValidateIngressAddons 80.89
174 TestJSONOutput/start/Command 232.78
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/pause/Command 7.82
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/unpause/Command 7.73
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 28.5
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 1.51
202 TestMainNoArgs 0.26
203 TestMinikubeProfile 491.42
206 TestMountStart/serial/StartWithMountFirst 146.98
207 TestMountStart/serial/VerifyMountFirst 9.44
228 TestPreload 492.84
229 TestScheduledStopWindows 325.23
236 TestKubernetesUpgrade 879.72
238 TestStoppedBinaryUpgrade/Setup 0.53
259 TestPause/serial/Start 342.67
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.29
263 TestStoppedBinaryUpgrade/MinikubeLogs 10.41
264 TestPause/serial/SecondStartNoReconfiguration 332.14
265 TestPause/serial/Pause 8.45
266 TestPause/serial/VerifyStatus 12.92
267 TestPause/serial/Unpause 8.05
268 TestPause/serial/PauseAgain 8.18
269 TestPause/serial/DeletePaused 42.52
270 TestPause/serial/VerifyDeletedResources 18.26
271 TestNetworkPlugins/group/auto/Start 431.63
272 TestNetworkPlugins/group/kindnet/Start 422.02
273 TestNetworkPlugins/group/calico/Start 505.25
274 TestNetworkPlugins/group/auto/KubeletFlags 10.1
275 TestNetworkPlugins/group/auto/NetCatPod 16.49
276 TestNetworkPlugins/group/auto/DNS 0.34
277 TestNetworkPlugins/group/auto/Localhost 0.42
278 TestNetworkPlugins/group/auto/HairPin 0.35
279 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
280 TestNetworkPlugins/group/kindnet/KubeletFlags 11.15
281 TestNetworkPlugins/group/kindnet/NetCatPod 17.53
282 TestNetworkPlugins/group/kindnet/DNS 0.54
283 TestNetworkPlugins/group/kindnet/Localhost 0.44
284 TestNetworkPlugins/group/kindnet/HairPin 0.43
285 TestNetworkPlugins/group/custom-flannel/Start 306.62
286 TestNetworkPlugins/group/calico/ControllerPod 6.03
287 TestNetworkPlugins/group/calico/KubeletFlags 12.27
288 TestNetworkPlugins/group/calico/NetCatPod 17.59
289 TestNetworkPlugins/group/calico/DNS 0.4
290 TestNetworkPlugins/group/calico/Localhost 0.36
291 TestNetworkPlugins/group/calico/HairPin 0.38
292 TestNetworkPlugins/group/false/Start 252.6
293 TestNetworkPlugins/group/custom-flannel/KubeletFlags 12.38
294 TestNetworkPlugins/group/custom-flannel/NetCatPod 16.66
295 TestNetworkPlugins/group/custom-flannel/DNS 0.66
296 TestNetworkPlugins/group/custom-flannel/Localhost 0.44
297 TestNetworkPlugins/group/custom-flannel/HairPin 0.42
298 TestNetworkPlugins/group/enable-default-cni/Start 275.8
299 TestNetworkPlugins/group/false/KubeletFlags 12.91
300 TestNetworkPlugins/group/false/NetCatPod 17.6
301 TestNetworkPlugins/group/false/DNS 0.44
302 TestNetworkPlugins/group/false/Localhost 0.41
303 TestNetworkPlugins/group/false/HairPin 0.43
304 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 12.15
305 TestNetworkPlugins/group/flannel/Start 255.61
306 TestNetworkPlugins/group/enable-default-cni/NetCatPod 16.69
307 TestNetworkPlugins/group/enable-default-cni/DNS 0.41
308 TestNetworkPlugins/group/enable-default-cni/Localhost 0.41
309 TestNetworkPlugins/group/enable-default-cni/HairPin 0.39
311 TestNetworkPlugins/group/flannel/ControllerPod 6.03
312 TestNetworkPlugins/group/flannel/KubeletFlags 12.48
313 TestNetworkPlugins/group/flannel/NetCatPod 17.58
314 TestNetworkPlugins/group/flannel/DNS 0.35
315 TestNetworkPlugins/group/flannel/Localhost 0.34
316 TestNetworkPlugins/group/flannel/HairPin 0.34
TestDownloadOnly/v1.16.0/json-events (17.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-453500 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-453500 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv: (17.1355425s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (17.14s)
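
For context on what "json-events" exercises: `minikube start -o=json` emits one JSON event per line on stdout, and the test consumes that stream. A minimal consumer sketch, assuming nothing about minikube's exact event schema beyond line-delimited JSON (the printed fields are whatever each event happens to carry):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Pipe `minikube start -o=json ...` output into this program.
	scanner := bufio.NewScanner(os.Stdin)
	scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // individual events can be long
	for scanner.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(scanner.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON noise on the stream
		}
		fmt.Printf("%v: %v\n", ev["type"], ev["data"])
	}
}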

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.07s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-453500
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-453500: exit status 85 (283.9273ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-453500 | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:41 UTC |          |
	|         | -p download-only-453500        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 11:41:42
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 11:41:42.569224    4732 out.go:296] Setting OutFile to fd 600 ...
	I1218 11:41:42.572575    4732 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:41:42.572575    4732 out.go:309] Setting ErrFile to fd 604...
	I1218 11:41:42.572575    4732 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1218 11:41:42.591023    4732 root.go:314] Error reading config file at C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I1218 11:41:42.604608    4732 out.go:303] Setting JSON to true
	I1218 11:41:42.608359    4732 start.go:128] hostinfo: {"hostname":"minikube7","uptime":177,"bootTime":1702899525,"procs":207,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1218 11:41:42.609026    4732 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1218 11:41:42.610494    4732 out.go:97] [download-only-453500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1218 11:41:42.611022    4732 notify.go:220] Checking for updates...
	W1218 11:41:42.611022    4732 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I1218 11:41:42.612151    4732 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 11:41:42.613086    4732 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1218 11:41:42.614269    4732 out.go:169] MINIKUBE_LOCATION=17824
	I1218 11:41:42.614892    4732 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1218 11:41:42.616707    4732 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1218 11:41:42.618352    4732 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 11:41:48.343243    4732 out.go:97] Using the hyperv driver based on user configuration
	I1218 11:41:48.343243    4732 start.go:298] selected driver: hyperv
	I1218 11:41:48.343243    4732 start.go:902] validating driver "hyperv" against <nil>
	I1218 11:41:48.344230    4732 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1218 11:41:48.396491    4732 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I1218 11:41:48.397340    4732 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1218 11:41:48.397340    4732 cni.go:84] Creating CNI manager for ""
	I1218 11:41:48.397340    4732 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1218 11:41:48.397340    4732 start_flags.go:323] config:
	{Name:download-only-453500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-453500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 11:41:48.398866    4732 iso.go:125] acquiring lock: {Name:mka7fdf4332632f41b99a369cc525e40499a0030 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 11:41:48.399998    4732 out.go:97] Downloading VM boot image ...
	I1218 11:41:48.400163    4732 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.32.1-1702490427-17765-amd64.iso
	I1218 11:41:52.174060    4732 out.go:97] Starting control plane node download-only-453500 in cluster download-only-453500
	I1218 11:41:52.174716    4732 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1218 11:41:52.213712    4732 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1218 11:41:52.214491    4732 cache.go:56] Caching tarball of preloaded images
	I1218 11:41:52.214491    4732 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1218 11:41:52.215966    4732 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1218 11:41:52.215966    4732 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1218 11:41:52.294955    4732 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1218 11:41:56.285981    4732 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1218 11:41:56.288037    4732 preload.go:256] verifying checksum of C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1218 11:41:57.225305    4732 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1218 11:41:57.226064    4732 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\download-only-453500\config.json ...
	I1218 11:41:57.226064    4732 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\download-only-453500\config.json: {Name:mkeb7cf7bace36290836dfea36deb52e6edfd19c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:41:57.227531    4732 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1218 11:41:57.231568    4732 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/windows/amd64/kubectl.exe.sha1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\windows\amd64\v1.16.0/kubectl.exe
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-453500"

                                                
                                                
-- /stdout --
** stderr ** 
	W1218 11:41:59.721310    8736 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.29s)
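
The assertion behind "exit status 85" above reduces to running the binary and inspecting the process exit code: on a download-only profile no node exists yet, so `minikube logs` is expected to fail. A sketch of that check, assuming only the command line from the log (not the test's actual helper):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-windows-amd64.exe", "logs", "-p", "download-only-453500")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The non-zero status (85 in the run above) is the expected outcome here.
		fmt.Printf("exit status %d\n%s", exitErr.ExitCode(), out)
		return
	}
	fmt.Printf("unexpected success:\n%s", out)
}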

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (14.04s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-453500 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-453500 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv: (14.0349529s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (14.04s)
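
The download.go lines in these download-only runs fetch each preload tarball with a `?checksum=md5:...` query and verify the saved file against that digest. The verification idea reduces to hashing the tarball and comparing hex digests; a stdlib sketch (not minikube's downloader — the filename and digest below are the v1.28.4 values copied from the preload URL logged in the output that follows):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 streams the file through an MD5 hash and compares the result
// with the digest advertised in the preload URL's checksum query.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	err := verifyMD5("preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4",
		"7ebdea7754e21f51b865dbfc36b53b7d")
	fmt.Println(err)
}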

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-453500
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-453500: exit status 85 (267.3549ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-453500 | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:41 UTC |          |
	|         | -p download-only-453500        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	| start   | -o=json --download-only        | download-only-453500 | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:42 UTC |          |
	|         | -p download-only-453500        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 11:42:00
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 11:42:00.084085    1644 out.go:296] Setting OutFile to fd 724 ...
	I1218 11:42:00.084822    1644 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:42:00.084822    1644 out.go:309] Setting ErrFile to fd 728...
	I1218 11:42:00.084822    1644 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1218 11:42:00.101663    1644 root.go:314] Error reading config file at C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I1218 11:42:00.109181    1644 out.go:303] Setting JSON to true
	I1218 11:42:00.111726    1644 start.go:128] hostinfo: {"hostname":"minikube7","uptime":194,"bootTime":1702899525,"procs":207,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1218 11:42:00.112720    1644 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1218 11:42:00.113299    1644 out.go:97] [download-only-453500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1218 11:42:00.114319    1644 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 11:42:00.113299    1644 notify.go:220] Checking for updates...
	I1218 11:42:00.115505    1644 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1218 11:42:00.116142    1644 out.go:169] MINIKUBE_LOCATION=17824
	I1218 11:42:00.117387    1644 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1218 11:42:00.118822    1644 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1218 11:42:00.119862    1644 config.go:182] Loaded profile config "download-only-453500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1218 11:42:00.120203    1644 start.go:810] api.Load failed for download-only-453500: filestore "download-only-453500": Docker machine "download-only-453500" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1218 11:42:00.120203    1644 driver.go:392] Setting default libvirt URI to qemu:///system
	W1218 11:42:00.120203    1644 start.go:810] api.Load failed for download-only-453500: filestore "download-only-453500": Docker machine "download-only-453500" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1218 11:42:05.470924    1644 out.go:97] Using the hyperv driver based on existing profile
	I1218 11:42:05.470924    1644 start.go:298] selected driver: hyperv
	I1218 11:42:05.470924    1644 start.go:902] validating driver "hyperv" against &{Name:download-only-453500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-453500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 11:42:05.519916    1644 cni.go:84] Creating CNI manager for ""
	I1218 11:42:05.519916    1644 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1218 11:42:05.519916    1644 start_flags.go:323] config:
	{Name:download-only-453500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-453500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 11:42:05.520484    1644 iso.go:125] acquiring lock: {Name:mka7fdf4332632f41b99a369cc525e40499a0030 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 11:42:05.521652    1644 out.go:97] Starting control plane node download-only-453500 in cluster download-only-453500
	I1218 11:42:05.521652    1644 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 11:42:05.564312    1644 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1218 11:42:05.565188    1644 cache.go:56] Caching tarball of preloaded images
	I1218 11:42:05.565258    1644 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 11:42:05.566743    1644 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1218 11:42:05.566743    1644 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1218 11:42:05.642182    1644 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-453500"

-- /stdout --
** stderr ** 
	W1218 11:42:14.042373    8280 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.27s)

TestDownloadOnly/v1.29.0-rc.2/json-events (13.5s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-453500 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-453500 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv: (13.4994811s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (13.50s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.26s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-453500
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-453500: exit status 85 (262.7832ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-453500 | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:41 UTC |          |
	|         | -p download-only-453500           |                      |                   |         |                     |          |
	|         | --force --alsologtostderr         |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |                   |         |                     |          |
	|         | --container-runtime=docker        |                      |                   |         |                     |          |
	|         | --driver=hyperv                   |                      |                   |         |                     |          |
	| start   | -o=json --download-only           | download-only-453500 | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:42 UTC |          |
	|         | -p download-only-453500           |                      |                   |         |                     |          |
	|         | --force --alsologtostderr         |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |                   |         |                     |          |
	|         | --container-runtime=docker        |                      |                   |         |                     |          |
	|         | --driver=hyperv                   |                      |                   |         |                     |          |
	| start   | -o=json --download-only           | download-only-453500 | minikube7\jenkins | v1.32.0 | 18 Dec 23 11:42 UTC |          |
	|         | -p download-only-453500           |                      |                   |         |                     |          |
	|         | --force --alsologtostderr         |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |                   |         |                     |          |
	|         | --container-runtime=docker        |                      |                   |         |                     |          |
	|         | --driver=hyperv                   |                      |                   |         |                     |          |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 11:42:14
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 11:42:14.393473    1668 out.go:296] Setting OutFile to fd 596 ...
	I1218 11:42:14.394474    1668 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:42:14.394474    1668 out.go:309] Setting ErrFile to fd 600...
	I1218 11:42:14.394474    1668 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1218 11:42:14.417528    1668 root.go:314] Error reading config file at C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I1218 11:42:14.431650    1668 out.go:303] Setting JSON to true
	I1218 11:42:14.437844    1668 start.go:128] hostinfo: {"hostname":"minikube7","uptime":209,"bootTime":1702899525,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1218 11:42:14.437844    1668 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1218 11:42:14.438898    1668 out.go:97] [download-only-453500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1218 11:42:14.439859    1668 notify.go:220] Checking for updates...
	I1218 11:42:14.445849    1668 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 11:42:14.447145    1668 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1218 11:42:14.447853    1668 out.go:169] MINIKUBE_LOCATION=17824
	I1218 11:42:14.448866    1668 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1218 11:42:14.449852    1668 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1218 11:42:14.450865    1668 config.go:182] Loaded profile config "download-only-453500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W1218 11:42:14.451855    1668 start.go:810] api.Load failed for download-only-453500: filestore "download-only-453500": Docker machine "download-only-453500" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1218 11:42:14.451855    1668 driver.go:392] Setting default libvirt URI to qemu:///system
	W1218 11:42:14.451855    1668 start.go:810] api.Load failed for download-only-453500: filestore "download-only-453500": Docker machine "download-only-453500" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1218 11:42:19.868888    1668 out.go:97] Using the hyperv driver based on existing profile
	I1218 11:42:19.868965    1668 start.go:298] selected driver: hyperv
	I1218 11:42:19.868965    1668 start.go:902] validating driver "hyperv" against &{Name:download-only-453500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-453500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 11:42:19.918066    1668 cni.go:84] Creating CNI manager for ""
	I1218 11:42:19.918190    1668 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1218 11:42:19.918190    1668 start_flags.go:323] config:
	{Name:download-only-453500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-453500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 11:42:19.918707    1668 iso.go:125] acquiring lock: {Name:mka7fdf4332632f41b99a369cc525e40499a0030 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 11:42:19.922114    1668 out.go:97] Starting control plane node download-only-453500 in cluster download-only-453500
	I1218 11:42:19.922114    1668 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1218 11:42:19.969490    1668 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1218 11:42:19.969490    1668 cache.go:56] Caching tarball of preloaded images
	I1218 11:42:19.970339    1668 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1218 11:42:19.971149    1668 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I1218 11:42:19.971214    1668 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1218 11:42:20.045777    1668 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:d472e9d5f1548dd0d68eb75b714c5436 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1218 11:42:24.197789    1668 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1218 11:42:24.199778    1668 preload.go:256] verifying checksum of C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-453500"

-- /stdout --
** stderr ** 
	W1218 11:42:27.804235   14924 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.26s)
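
Note: the LogsDuration output above captures minikube's preload flow: preload.go requests each tarball with an md5 digest in the URL's checksum query parameter, then saves and verifies that digest. A minimal Go sketch of the same download-and-verify pattern; the URL and digest are copied from the log, while downloadWithMD5 and its structure are illustrative, not minikube's actual download.go:

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // downloadWithMD5 streams url to dest while hashing, then compares
    // the hex digest against want.
    func downloadWithMD5(url, dest, want string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()

        h := md5.New()
        // MultiWriter hashes while writing, so the file is read only once.
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
        }
        return nil
    }

    func main() {
        url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4"
        if err := downloadWithMD5(url, "preload.tar.lz4", "d472e9d5f1548dd0d68eb75b714c5436"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }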

TestDownloadOnly/DeleteAll (1.31s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3086695s)
--- PASS: TestDownloadOnly/DeleteAll (1.31s)

TestDownloadOnly/DeleteAlwaysSucceeds (1.27s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-453500
aaa_download_only_test.go:202: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-453500: (1.2746883s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (1.27s)

TestBinaryMirror (7.17s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-275000 --alsologtostderr --binary-mirror http://127.0.0.1:57962 --driver=hyperv
aaa_download_only_test.go:307: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-275000 --alsologtostderr --binary-mirror http://127.0.0.1:57962 --driver=hyperv: (6.2285572s)
helpers_test.go:175: Cleaning up "binary-mirror-275000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-275000
--- PASS: TestBinaryMirror (7.17s)
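
Note: TestBinaryMirror points --binary-mirror at a local HTTP endpoint (http://127.0.0.1:57962 above) so the Kubernetes binaries are fetched from it rather than the public release buckets. A sketch of the kind of local mirror that flag expects; the ./mirror directory and the layout hinted at in the comment are assumptions, since the exact paths minikube requests are not shown in this log:

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Serve ./mirror (e.g. containing v1.28.4/bin/linux/amd64/kubectl)
        // on the port handed to --binary-mirror.
        log.Fatal(http.ListenAndServe("127.0.0.1:57962",
            http.FileServer(http.Dir("./mirror"))))
    }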

TestOffline (442.09s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-592200 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-592200 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (6m32.5970209s)
helpers_test.go:175: Cleaning up "offline-docker-592200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-592200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-592200: (49.4874541s)
--- PASS: TestOffline (442.09s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.3s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-922300
addons_test.go:927: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-922300: exit status 85 (299.8197ms)

-- stdout --
	* Profile "addons-922300" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-922300"

-- /stdout --
** stderr ** 
	W1218 11:42:39.099865    2296 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.30s)
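
Note: both PreSetup checks hinge on the distinct exit status 85 seen above when the target profile does not exist. A small Go sketch of asserting on that code from a harness; the command line mirrors the log, everything else is illustrative:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-windows-amd64.exe",
            "addons", "enable", "dashboard", "-p", "addons-922300")
        err := cmd.Run()
        // A non-zero exit surfaces as *exec.ExitError with the code attached.
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 85 {
            fmt.Println("got expected exit status 85 (profile not found)")
            return
        }
        fmt.Println("unexpected result:", err)
    }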

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.28s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-922300
addons_test.go:938: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-922300: exit status 85 (279.4077ms)

-- stdout --
	* Profile "addons-922300" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-922300"

-- /stdout --
** stderr ** 
	W1218 11:42:39.098857    3372 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.28s)

TestAddons/Setup (383.12s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-922300 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-922300 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m23.1215488s)
--- PASS: TestAddons/Setup (383.12s)

TestAddons/parallel/Ingress (63.91s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-922300 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-922300 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-922300 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a783dcec-65e8-419a-aa55-7031b4d65a4d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a783dcec-65e8-419a-aa55-7031b4d65a4d] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.0091683s
addons_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-922300 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p addons-922300 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.5998257s)
addons_test.go:268: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-922300 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W1218 11:50:41.279790    4756 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:285: (dbg) Run:  kubectl --context addons-922300 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-922300 ip
addons_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p addons-922300 ip: (2.5174877s)
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.238.87
addons_test.go:305: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-922300 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-windows-amd64.exe -p addons-922300 addons disable ingress-dns --alsologtostderr -v=1: (15.0954624s)
addons_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-922300 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe -p addons-922300 addons disable ingress --alsologtostderr -v=1: (21.6912236s)
--- PASS: TestAddons/parallel/Ingress (63.91s)
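
Note: the Ingress test exercises host-based routing: curl inside the VM sends Host: nginx.example.com to 127.0.0.1, and ingress-dns is probed with nslookup against the cluster IP. A hedged Go sketch of the same Host-header probe issued from the host machine instead; the IP is the one `minikube ip` printed above, and reachability from the host is an assumption:

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        req, err := http.NewRequest("GET", "http://192.168.238.87/", nil)
        if err != nil {
            panic(err)
        }
        // The ingress matches on the Host header, not the URL's IP.
        req.Host = "nginx.example.com"

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, len(body), "bytes")
    }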

TestAddons/parallel/InspektorGadget (26.74s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-46nrg" [dc076148-5aff-4283-a788-f9cc8dc92a19] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0167609s
addons_test.go:840: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-922300
addons_test.go:840: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-922300: (21.7199147s)
--- PASS: TestAddons/parallel/InspektorGadget (26.74s)

TestAddons/parallel/MetricsServer (20.96s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 27.9431ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-z6jw8" [42fa398b-e99a-4a62-b71a-e0b0b7401b30] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0436036s
addons_test.go:414: (dbg) Run:  kubectl --context addons-922300 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-922300 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:431: (dbg) Done: out/minikube-windows-amd64.exe -p addons-922300 addons disable metrics-server --alsologtostderr -v=1: (15.6442256s)
--- PASS: TestAddons/parallel/MetricsServer (20.96s)

TestAddons/parallel/HelmTiller (27.67s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 4.9982ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-btzrr" [d71469da-c580-44a0-88d1-3239dbeb8d89] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0128126s
addons_test.go:472: (dbg) Run:  kubectl --context addons-922300 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-922300 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.4896245s)
addons_test.go:489: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-922300 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:489: (dbg) Done: out/minikube-windows-amd64.exe -p addons-922300 addons disable helm-tiller --alsologtostderr -v=1: (15.1479026s)
--- PASS: TestAddons/parallel/HelmTiller (27.67s)

TestAddons/parallel/CSI (87.39s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 28.6962ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-922300 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-922300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-922300 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-922300 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3cd550bf-fc65-44fd-b358-0b5af8f90495] Pending
helpers_test.go:344: "task-pv-pod" [3cd550bf-fc65-44fd-b358-0b5af8f90495] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3cd550bf-fc65-44fd-b358-0b5af8f90495] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 23.0165966s
addons_test.go:583: (dbg) Run:  kubectl --context addons-922300 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-922300 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-922300 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-922300 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-922300 delete pod task-pv-pod: (1.0629084s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-922300 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-922300 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-922300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-922300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-922300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-922300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-922300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-922300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-922300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-922300 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [894075bf-b191-465d-bd3b-e726b181cacb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [894075bf-b191-465d-bd3b-e726b181cacb] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0167946s
addons_test.go:625: (dbg) Run:  kubectl --context addons-922300 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-922300 delete pod task-pv-pod-restore: (2.1327389s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-922300 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-922300 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-922300 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-windows-amd64.exe -p addons-922300 addons disable csi-hostpath-driver --alsologtostderr -v=1: (23.8904896s)
addons_test.go:641: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-922300 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:641: (dbg) Done: out/minikube-windows-amd64.exe -p addons-922300 addons disable volumesnapshots --alsologtostderr -v=1: (16.2678681s)
--- PASS: TestAddons/parallel/CSI (87.39s)
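
Note: the repeated `kubectl get pvc ... -o jsonpath={.status.phase}` lines above are a poll loop waiting for the claim to reach Bound before the next manifest is applied. A compact Go version of that loop, assuming only kubectl on PATH; the function name and intervals are illustrative, not helpers_test.go's actual code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForPVCBound polls the claim's status.phase until it is Bound
    // or the deadline passes, mirroring the jsonpath query in the log.
    func waitForPVCBound(context, name, ns string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", context,
                "get", "pvc", name, "-n", ns,
                "-o", "jsonpath={.status.phase}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "Bound" {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
    }

    func main() {
        if err := waitForPVCBound("addons-922300", "hpvc", "default", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }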

TestAddons/parallel/Headlamp (41.06s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-922300 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-922300 --alsologtostderr -v=1: (18.0457124s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-mcx44" [9af17293-2a1d-48eb-83d7-f255bb7a8072] Pending
helpers_test.go:344: "headlamp-777fd4b855-mcx44" [9af17293-2a1d-48eb-83d7-f255bb7a8072] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-mcx44" [9af17293-2a1d-48eb-83d7-f255bb7a8072] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 23.0118712s
--- PASS: TestAddons/parallel/Headlamp (41.06s)

TestAddons/parallel/CloudSpanner (22.73s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-ggs97" [3bbaf30f-02ed-4812-bb5d-b665f77651b0] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0104625s
addons_test.go:859: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-922300
addons_test.go:859: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-922300: (17.7131531s)
--- PASS: TestAddons/parallel/CloudSpanner (22.73s)

TestAddons/parallel/LocalPath (44.34s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-922300 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-922300 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-922300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-922300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-922300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-922300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-922300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-922300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-922300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-922300 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a8768d4c-527f-4c0f-a813-53ff74ec9174] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a8768d4c-527f-4c0f-a813-53ff74ec9174] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a8768d4c-527f-4c0f-a813-53ff74ec9174] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 15.0196843s
addons_test.go:890: (dbg) Run:  kubectl --context addons-922300 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-922300 ssh "cat /opt/local-path-provisioner/pvc-72013ae2-6033-4d78-8186-560dbbf121d3_default_test-pvc/file1"
addons_test.go:899: (dbg) Done: out/minikube-windows-amd64.exe -p addons-922300 ssh "cat /opt/local-path-provisioner/pvc-72013ae2-6033-4d78-8186-560dbbf121d3_default_test-pvc/file1": (11.3355858s)
addons_test.go:911: (dbg) Run:  kubectl --context addons-922300 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-922300 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-922300 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-windows-amd64.exe -p addons-922300 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (9.0415139s)
--- PASS: TestAddons/parallel/LocalPath (44.34s)

TestAddons/parallel/NvidiaDevicePlugin (23.01s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-l942k" [67ec697e-80eb-48d5-9be5-6aa964458aac] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0238068s
addons_test.go:954: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-922300
addons_test.go:954: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-922300: (17.9788125s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (23.01s)

TestAddons/serial/GCPAuth/Namespaces (0.36s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-922300 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-922300 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.36s)

TestAddons/StoppedEnableDisable (48.31s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-922300
addons_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-922300: (36.3730767s)
addons_test.go:175: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-922300
addons_test.go:175: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-922300: (4.624098s)
addons_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-922300
addons_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-922300: (4.7622018s)
addons_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-922300
addons_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-922300: (2.5471607s)
--- PASS: TestAddons/StoppedEnableDisable (48.31s)

TestCertOptions (493.77s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-933400 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-933400 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (7m15.0537723s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-933400 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-933400 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (10.4595874s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-933400 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-933400 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-933400 -- "sudo cat /etc/kubernetes/admin.conf": (10.2487758s)
helpers_test.go:175: Cleaning up "cert-options-933400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-933400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-933400: (37.8311857s)
--- PASS: TestCertOptions (493.77s)
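
Note: TestCertOptions inspects /var/lib/minikube/certs/apiserver.crt with openssl to confirm the extra --apiserver-ips and --apiserver-names values landed in the certificate. The same check can be done in Go with crypto/x509; this sketch assumes the cert has already been copied out of the VM to a local apiserver.crt:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("apiserver.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        fmt.Println("DNS SANs:", cert.DNSNames)     // expect localhost, www.google.com, ...
        fmt.Println("IP SANs: ", cert.IPAddresses)  // expect 127.0.0.1, 192.168.15.15, ...
        fmt.Println("NotAfter:", cert.NotAfter)     // the field TestCertExpiration below watches
    }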

TestCertExpiration (938.81s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-339800 --memory=2048 --cert-expiration=3m --driver=hyperv
E1218 13:43:42.387271   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-339800 --memory=2048 --cert-expiration=3m --driver=hyperv: (6m54.8381202s)
E1218 13:50:25.652176   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-339800 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-339800 --memory=2048 --cert-expiration=8760h --driver=hyperv: (5m1.0715258s)
helpers_test.go:175: Cleaning up "cert-expiration-339800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-339800
E1218 13:58:25.588775   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 13:58:42.380236   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 13:58:56.582332   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 13:59:02.432696   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-339800: (42.8959676s)
--- PASS: TestCertExpiration (938.81s)

TestDockerFlags (457.5s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-904000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-904000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (6m37.8181489s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-904000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-904000 ssh "sudo systemctl show docker --property=Environment --no-pager": (10.200257s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-904000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-904000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (11.6488457s)
helpers_test.go:175: Cleaning up "docker-flags-904000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-904000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-904000: (37.8351349s)
--- PASS: TestDockerFlags (457.50s)
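
Note: TestDockerFlags verifies that --docker-env and --docker-opt values reach the VM's docker unit by reading systemd properties over `minikube ssh`. A sketch of that substring check; the profile name and expected pairs are copied from the log, and error handling is trimmed:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same property query the test runs inside the VM.
        out, err := exec.Command("out/minikube-windows-amd64.exe",
            "-p", "docker-flags-904000", "ssh",
            "sudo systemctl show docker --property=Environment --no-pager").Output()
        if err != nil {
            panic(err)
        }
        env := string(out)
        for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
            fmt.Printf("%s present: %v\n", want, strings.Contains(env, want))
        }
    }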

TestErrorSpam/start (17.42s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 start --dry-run: (5.7771576s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 start --dry-run: (5.7566318s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 start --dry-run: (5.8769633s)
--- PASS: TestErrorSpam/start (17.42s)

TestErrorSpam/status (36.82s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 status: (12.6787116s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 status: (12.0918432s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 status: (12.0446208s)
--- PASS: TestErrorSpam/status (36.82s)

TestErrorSpam/pause (22.67s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 pause: (7.8123534s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 pause: (7.4259723s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 pause: (7.4256643s)
--- PASS: TestErrorSpam/pause (22.67s)

TestErrorSpam/unpause (22.71s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 unpause: (7.6751974s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 unpause: (7.5182245s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 unpause: (7.5087329s)
--- PASS: TestErrorSpam/unpause (22.71s)

TestErrorSpam/stop (46.87s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 stop: (29.2471623s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 stop
E1218 11:59:02.404700   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 stop: (8.9644628s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-356100 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-356100 stop: (8.6560672s)
--- PASS: TestErrorSpam/stop (46.87s)

TestFunctional/serial/CopySyncFile (0.04s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\14928\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

TestFunctional/serial/StartWithProxy (198.06s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-806500 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E1218 11:59:30.209760   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
functional_test.go:2233: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-806500 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m18.0556423s)
--- PASS: TestFunctional/serial/StartWithProxy (198.06s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (113.36s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-806500 --alsologtostderr -v=8
E1218 12:04:02.406327   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-806500 --alsologtostderr -v=8: (1m53.3592112s)
functional_test.go:659: soft start took 1m53.3608725s for "functional-806500" cluster.
--- PASS: TestFunctional/serial/SoftStart (113.36s)

TestFunctional/serial/KubeContext (0.15s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.15s)

TestFunctional/serial/KubectlGetPods (0.23s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-806500 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.23s)

TestFunctional/serial/CacheCmd/cache/add_remote (26.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 cache add registry.k8s.io/pause:3.1: (9.3887281s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 cache add registry.k8s.io/pause:3.3: (8.7091469s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 cache add registry.k8s.io/pause:latest: (8.5654114s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (26.66s)

TestFunctional/serial/CacheCmd/cache/add_local (10.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-806500 C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local914759941\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-806500 C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local914759941\001: (1.9289913s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 cache add minikube-local-cache-test:functional-806500
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 cache add minikube-local-cache-test:functional-806500: (8.0990469s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 cache delete minikube-local-cache-test:functional-806500
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-806500
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (10.52s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.30s)

TestFunctional/serial/CacheCmd/cache/list (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.28s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 ssh sudo crictl images: (9.3134478s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (36.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.3547066s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-806500 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.3551181s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	W1218 12:05:36.144120    9088 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 cache reload: (8.1589248s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.3503056s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (36.22s)
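
The cycle above: remove the pause image inside the VM, confirm crictl no longer finds it (exit 1), run "cache reload" to push the host-cached images back, then confirm the image is present again. Condensed sketch (commands copied from the run above):

    out/minikube-windows-amd64.exe -p functional-806500 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-windows-amd64.exe -p functional-806500 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # exit 1: "no such image"
    out/minikube-windows-amd64.exe -p functional-806500 cache reload                                             # restores images from the host-side cache
    out/minikube-windows-amd64.exe -p functional-806500 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # exit 0: image is back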

TestFunctional/serial/CacheCmd/cache/delete (0.58s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.58s)

TestFunctional/serial/MinikubeKubectlCmd (0.51s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 kubectl -- --context functional-806500 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.51s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.39s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out\kubectl.exe --context functional-806500 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.39s)

TestFunctional/serial/ExtraConfig (116.32s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-806500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-806500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m56.315923s)
functional_test.go:757: restart took 1m56.3166039s for "functional-806500" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (116.32s)

TestFunctional/serial/ComponentHealth (0.2s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-806500 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.20s)

TestFunctional/serial/LogsCmd (8.39s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 logs: (8.3929628s)
--- PASS: TestFunctional/serial/LogsCmd (8.39s)

TestFunctional/serial/LogsFileCmd (10.45s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 logs --file C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2212837014\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 logs --file C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2212837014\001\logs.txt: (10.4455274s)
--- PASS: TestFunctional/serial/LogsFileCmd (10.45s)

TestFunctional/serial/InvalidService (21.18s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-806500 apply -f testdata\invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-806500
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-806500: exit status 115 (16.7321788s)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.235.2:32114 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	W1218 12:08:24.250573    9536 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_service_c9bf6787273d25f6c9d72c0b156373dea6a4fe44_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-806500 delete -f testdata\invalidsvc.yaml
functional_test.go:2326: (dbg) Done: kubectl --context functional-806500 delete -f testdata\invalidsvc.yaml: (1.0345288s)
--- PASS: TestFunctional/serial/InvalidService (21.18s)
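
Exit status 115 is the expected result here, not a failure: the Service object exists, so a NodePort URL is printed, but no running pod backs it, and "minikube service" bails out with SVC_UNREACHABLE. Sketch of the scenario (commands copied from the run above):

    kubectl --context functional-806500 apply -f testdata\invalidsvc.yaml
    out/minikube-windows-amd64.exe service invalid-svc -p functional-806500    # exit 115: SVC_UNREACHABLE, no running pod for the service
    kubectl --context functional-806500 delete -f testdata\invalidsvc.yaml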

TestFunctional/parallel/StatusCmd (42.49s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 status: (13.8565695s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (14.2389446s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 status -o json: (14.3951454s)
--- PASS: TestFunctional/parallel/StatusCmd (42.49s)

TestFunctional/parallel/ServiceCmdConnect (26.24s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-806500 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-806500 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-zgv86" [ca5e4409-2efa-461f-b108-5b7d1b8fbdbe] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-zgv86" [ca5e4409-2efa-461f-b108-5b7d1b8fbdbe] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.011203s
functional_test.go:1648: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 service hello-node-connect --url
functional_test.go:1648: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 service hello-node-connect --url: (17.7992238s)
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.235.2:31936
functional_test.go:1674: http://192.168.235.2:31936: success! body:

Hostname: hello-node-connect-55497b8b78-zgv86

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.235.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.235.2:31936
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (26.24s)
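
The body above can be reproduced with a plain HTTP request against the NodePort URL the test discovered (IP and port are specific to this run's VM):

    curl http://192.168.235.2:31936/
    # echoserver echoes back its pod hostname, server values, and the request headers it received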

TestFunctional/parallel/AddonsCmd (0.81s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.81s)

TestFunctional/parallel/PersistentVolumeClaim (41.57s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9d9c990f-70f0-465e-9b92-5fbcd70c5eb7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0202361s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-806500 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-806500 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-806500 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-806500 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [69095af7-fb51-4cc4-af1d-6821f9448bbd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [69095af7-fb51-4cc4-af1d-6821f9448bbd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.0135191s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-806500 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-806500 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-806500 delete -f testdata/storage-provisioner/pod.yaml: (1.2920259s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-806500 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ff73aabc-0169-4fd9-9f32-306d945bcd69] Pending
helpers_test.go:344: "sp-pod" [ff73aabc-0169-4fd9-9f32-306d945bcd69] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ff73aabc-0169-4fd9-9f32-306d945bcd69] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.0143677s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-806500 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (41.57s)
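
The delete/re-apply in the middle of this test is the point of the exercise: a file written to the PVC-backed mount must survive the pod being destroyed and recreated. The core of the check (commands copied from the run above):

    kubectl --context functional-806500 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-806500 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-806500 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-806500 exec sp-pod -- ls /tmp/mount    # foo is still present: the claim outlived the pod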

TestFunctional/parallel/SSHCmd (19.67s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 ssh "echo hello"
functional_test.go:1724: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 ssh "echo hello": (9.8982943s)
functional_test.go:1741: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 ssh "cat /etc/hostname"
functional_test.go:1741: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 ssh "cat /etc/hostname": (9.7698809s)
--- PASS: TestFunctional/parallel/SSHCmd (19.67s)

TestFunctional/parallel/CpCmd (59.57s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 cp testdata\cp-test.txt /home/docker/cp-test.txt: (8.7357444s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 ssh -n functional-806500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 ssh -n functional-806500 "sudo cat /home/docker/cp-test.txt": (10.3368951s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 cp functional-806500:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalparallelCpCmd115578923\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 cp functional-806500:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalparallelCpCmd115578923\001\cp-test.txt: (11.2883484s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 ssh -n functional-806500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 ssh -n functional-806500 "sudo cat /home/docker/cp-test.txt": (11.2274s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (7.9804831s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 ssh -n functional-806500 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 ssh -n functional-806500 "sudo cat /tmp/does/not/exist/cp-test.txt": (9.9943253s)
--- PASS: TestFunctional/parallel/CpCmd (59.57s)

TestFunctional/parallel/MySQL (58.6s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-806500 replace --force -f testdata\mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-w9sw9" [9939be79-6008-4646-b69f-3897bc2d6b5a] Pending
helpers_test.go:344: "mysql-859648c796-w9sw9" [9939be79-6008-4646-b69f-3897bc2d6b5a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-w9sw9" [9939be79-6008-4646-b69f-3897bc2d6b5a] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 48.0087082s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-806500 exec mysql-859648c796-w9sw9 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-806500 exec mysql-859648c796-w9sw9 -- mysql -ppassword -e "show databases;": exit status 1 (451.5515ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-806500 exec mysql-859648c796-w9sw9 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-806500 exec mysql-859648c796-w9sw9 -- mysql -ppassword -e "show databases;": exit status 1 (364.7949ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-806500 exec mysql-859648c796-w9sw9 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-806500 exec mysql-859648c796-w9sw9 -- mysql -ppassword -e "show databases;": exit status 1 (423.4452ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-806500 exec mysql-859648c796-w9sw9 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-806500 exec mysql-859648c796-w9sw9 -- mysql -ppassword -e "show databases;": exit status 1 (345.4432ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-806500 exec mysql-859648c796-w9sw9 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (58.60s)
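
The repeated ERROR 1045 exits above are the test polling, most plausibly while mysqld is still initializing and the root password has not yet taken effect; the final run of the same query succeeds, so the test passes. The probe being retried:

    kubectl --context functional-806500 exec mysql-859648c796-w9sw9 -- mysql -ppassword -e "show databases;"
    # retried until it exits 0, once the server accepts root authentication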

TestFunctional/parallel/FileSync (10.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/14928/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 ssh "sudo cat /etc/test/nested/copy/14928/hosts"
functional_test.go:1930: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 ssh "sudo cat /etc/test/nested/copy/14928/hosts": (10.2436465s)
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (10.24s)

TestFunctional/parallel/CertSync (63.44s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/14928.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 ssh "sudo cat /etc/ssl/certs/14928.pem"
functional_test.go:1972: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 ssh "sudo cat /etc/ssl/certs/14928.pem": (10.6704614s)
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/14928.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 ssh "sudo cat /usr/share/ca-certificates/14928.pem"
functional_test.go:1972: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 ssh "sudo cat /usr/share/ca-certificates/14928.pem": (10.108514s)
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1972: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 ssh "sudo cat /etc/ssl/certs/51391683.0": (10.7498635s)
functional_test.go:1998: Checking for existence of /etc/ssl/certs/149282.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 ssh "sudo cat /etc/ssl/certs/149282.pem"
functional_test.go:1999: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 ssh "sudo cat /etc/ssl/certs/149282.pem": (10.6625218s)
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/149282.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 ssh "sudo cat /usr/share/ca-certificates/149282.pem"
functional_test.go:1999: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 ssh "sudo cat /usr/share/ca-certificates/149282.pem": (11.1483764s)
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1999: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (10.0932914s)
--- PASS: TestFunctional/parallel/CertSync (63.44s)
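
The hashed filenames checked above (51391683.0 and 3ec20f2e.0) have the shape of OpenSSL subject-hash links for the two synced certs, which is presumably how they are derived. One hedged way to confirm from inside the VM, assuming openssl is available in the guest image:

    out/minikube-windows-amd64.exe -p functional-806500 ssh "openssl x509 -noout -subject_hash -in /etc/ssl/certs/14928.pem"
    # if the assumption holds, this prints 51391683, matching the /etc/ssl/certs/51391683.0 name checked above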

TestFunctional/parallel/NodeLabels (0.2s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-806500 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.20s)

TestFunctional/parallel/NonActiveRuntimeDisabled (10.83s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-806500 ssh "sudo systemctl is-active crio": exit status 1 (10.8284546s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	W1218 12:10:11.315690    2904 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (10.83s)
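
Exit status 3 is also the expected outcome here: "systemctl is-active" exits non-zero for a unit that is not active, ssh propagates that status, and the test only requires that crio is not running on this docker-runtime cluster. The probe:

    out/minikube-windows-amd64.exe -p functional-806500 ssh "sudo systemctl is-active crio"
    # prints "inactive" and exits 3, which the test treats as a pass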

TestFunctional/parallel/License (3.3s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2287: (dbg) Done: out/minikube-windows-amd64.exe license: (3.280632s)
--- PASS: TestFunctional/parallel/License (3.30s)

TestFunctional/parallel/ProfileCmd/profile_not_create (9.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1274: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (8.914593s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (9.50s)

TestFunctional/parallel/ServiceCmd/DeployApp (19.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-806500 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-806500 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-clrbf" [5707c8a1-988f-4a6b-b541-352c348adda4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-clrbf" [5707c8a1-988f-4a6b-b541-352c348adda4] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 19.0137911s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (19.46s)

TestFunctional/parallel/ProfileCmd/profile_list (8.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1309: (dbg) Done: out/minikube-windows-amd64.exe profile list: (8.2281739s)
functional_test.go:1314: Took "8.2284163s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1328: Took "264.7522ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (8.49s)

TestFunctional/parallel/ProfileCmd/profile_json_output (9.24s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1360: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (8.9411477s)
functional_test.go:1365: Took "8.9413744s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1378: Took "297.3901ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (9.24s)

TestFunctional/parallel/ServiceCmd/List (14.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 service list
E1218 12:09:02.407449   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
functional_test.go:1458: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 service list: (14.5861502s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (14.59s)

TestFunctional/parallel/ServiceCmd/JSONOutput (14.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 service list -o json: (14.5477029s)
functional_test.go:1493: Took "14.5480918s" to run "out/minikube-windows-amd64.exe -p functional-806500 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (14.55s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.85s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-806500 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-806500 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-806500 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-806500 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 11292: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 5480: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.85s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-806500 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.59s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-806500 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [12d31b83-b9af-4a3b-85f9-8722e0e1e549] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [12d31b83-b9af-4a3b-85f9-8722e0e1e549] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.0129413s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.59s)

TestFunctional/parallel/Version/short (0.29s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 version --short
--- PASS: TestFunctional/parallel/Version/short (0.29s)

TestFunctional/parallel/Version/components (8.39s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 version -o=json --components: (8.3900745s)
--- PASS: TestFunctional/parallel/Version/components (8.39s)

TestFunctional/parallel/ImageCommands/ImageListShort (8.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 image ls --format short --alsologtostderr: (8.1634642s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-806500 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-806500
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-806500
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-806500 image ls --format short --alsologtostderr:
W1218 12:12:26.878786   11364 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I1218 12:12:26.991325   11364 out.go:296] Setting OutFile to fd 992 ...
I1218 12:12:26.992348   11364 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 12:12:26.992411   11364 out.go:309] Setting ErrFile to fd 1020...
I1218 12:12:26.992411   11364 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 12:12:27.026755   11364 config.go:182] Loaded profile config "functional-806500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 12:12:27.027775   11364 config.go:182] Loaded profile config "functional-806500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 12:12:27.028756   11364 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-806500 ).state
I1218 12:12:29.477857   11364 main.go:141] libmachine: [stdout =====>] : Running

I1218 12:12:29.478158   11364 main.go:141] libmachine: [stderr =====>] : 
I1218 12:12:29.496669   11364 ssh_runner.go:195] Run: systemctl --version
I1218 12:12:29.496669   11364 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-806500 ).state
I1218 12:12:31.907828   11364 main.go:141] libmachine: [stdout =====>] : Running

I1218 12:12:31.907878   11364 main.go:141] libmachine: [stderr =====>] : 
I1218 12:12:31.907990   11364 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-806500 ).networkadapters[0]).ipaddresses[0]
I1218 12:12:34.650329   11364 main.go:141] libmachine: [stdout =====>] : 192.168.235.2

I1218 12:12:34.650403   11364 main.go:141] libmachine: [stderr =====>] : 
I1218 12:12:34.651174   11364 sshutil.go:53] new ssh client: &{IP:192.168.235.2 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-806500\id_rsa Username:docker}
I1218 12:12:34.800111   11364 ssh_runner.go:235] Completed: systemctl --version: (5.3032803s)
I1218 12:12:34.813668   11364 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (8.16s)
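Note: the four ImageList subtests issue the same listing with different --format values; for reference, the variants exercised in this run (--alsologtostderr omitted):

    out/minikube-windows-amd64.exe -p functional-806500 image ls --format short
    out/minikube-windows-amd64.exe -p functional-806500 image ls --format table
    out/minikube-windows-amd64.exe -p functional-806500 image ls --format json
    out/minikube-windows-amd64.exe -p functional-806500 image ls --format yaml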

TestFunctional/parallel/ImageCommands/ImageListTable (7.90s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 image ls --format table --alsologtostderr: (7.9005873s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-806500 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-806500 | f0bd0f9ab802d | 30B    |
| docker.io/library/nginx                     | latest            | a6bd71f48f683 | 187MB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| docker.io/library/nginx                     | alpine            | 01e5c69afaf63 | 42.6MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/google-containers/addon-resizer      | functional-806500 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-806500 image ls --format table --alsologtostderr:
W1218 12:12:35.046805    7208 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I1218 12:12:35.134772    7208 out.go:296] Setting OutFile to fd 968 ...
I1218 12:12:35.153523    7208 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 12:12:35.154519    7208 out.go:309] Setting ErrFile to fd 1016...
I1218 12:12:35.154519    7208 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 12:12:35.171539    7208 config.go:182] Loaded profile config "functional-806500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 12:12:35.172111    7208 config.go:182] Loaded profile config "functional-806500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 12:12:35.172111    7208 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-806500 ).state
I1218 12:12:37.426167    7208 main.go:141] libmachine: [stdout =====>] : Running

I1218 12:12:37.426356    7208 main.go:141] libmachine: [stderr =====>] : 
I1218 12:12:37.443412    7208 ssh_runner.go:195] Run: systemctl --version
I1218 12:12:37.443412    7208 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-806500 ).state
I1218 12:12:39.815009    7208 main.go:141] libmachine: [stdout =====>] : Running

I1218 12:12:39.815081    7208 main.go:141] libmachine: [stderr =====>] : 
I1218 12:12:39.815179    7208 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-806500 ).networkadapters[0]).ipaddresses[0]
I1218 12:12:42.612375    7208 main.go:141] libmachine: [stdout =====>] : 192.168.235.2

I1218 12:12:42.612443    7208 main.go:141] libmachine: [stderr =====>] : 
I1218 12:12:42.612443    7208 sshutil.go:53] new ssh client: &{IP:192.168.235.2 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-806500\id_rsa Username:docker}
I1218 12:12:42.730757    7208 ssh_runner.go:235] Completed: systemctl --version: (5.2873347s)
I1218 12:12:42.740982    7208 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.90s)

TestFunctional/parallel/ImageCommands/ImageListJson (7.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 image ls --format json --alsologtostderr: (7.8846948s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-806500 image ls --format json --alsologtostderr:
[{"id":"a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},
{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-806500"],"size":"32900000"},
{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},
{"id":"f0bd0f9ab802dbaba29d518eff2e16ab9b4a51eb4d52de768b76f53bfcac19b5","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-806500"],"size":"30"},
{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},
{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},
{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},
{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},
{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},
{"id":"01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},
{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},
{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},
{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},
{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},
{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-806500 image ls --format json --alsologtostderr:
W1218 12:12:34.944323   13848 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I1218 12:12:35.029411   13848 out.go:296] Setting OutFile to fd 616 ...
I1218 12:12:35.030412   13848 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 12:12:35.030412   13848 out.go:309] Setting ErrFile to fd 864...
I1218 12:12:35.030464   13848 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 12:12:35.048410   13848 config.go:182] Loaded profile config "functional-806500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 12:12:35.049490   13848 config.go:182] Loaded profile config "functional-806500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 12:12:35.050206   13848 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-806500 ).state
I1218 12:12:37.363837   13848 main.go:141] libmachine: [stdout =====>] : Running

I1218 12:12:37.364049   13848 main.go:141] libmachine: [stderr =====>] : 
I1218 12:12:37.377136   13848 ssh_runner.go:195] Run: systemctl --version
I1218 12:12:37.378139   13848 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-806500 ).state
I1218 12:12:39.688745   13848 main.go:141] libmachine: [stdout =====>] : Running

I1218 12:12:39.688892   13848 main.go:141] libmachine: [stderr =====>] : 
I1218 12:12:39.689012   13848 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-806500 ).networkadapters[0]).ipaddresses[0]
I1218 12:12:42.503550   13848 main.go:141] libmachine: [stdout =====>] : 192.168.235.2

I1218 12:12:42.503550   13848 main.go:141] libmachine: [stderr =====>] : 
I1218 12:12:42.503550   13848 sshutil.go:53] new ssh client: &{IP:192.168.235.2 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-806500\id_rsa Username:docker}
I1218 12:12:42.606934   13848 ssh_runner.go:235] Completed: systemctl --version: (5.2296954s)
I1218 12:12:42.621640   13848 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.89s)

TestFunctional/parallel/ImageCommands/ImageListYaml (8.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 image ls --format yaml --alsologtostderr: (8.070587s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-806500 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-806500
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: f0bd0f9ab802dbaba29d518eff2e16ab9b4a51eb4d52de768b76f53bfcac19b5
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-806500
size: "30"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-806500 image ls --format yaml --alsologtostderr:
W1218 12:12:26.880788   10492 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I1218 12:12:26.988183   10492 out.go:296] Setting OutFile to fd 784 ...
I1218 12:12:26.992519   10492 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 12:12:26.992519   10492 out.go:309] Setting ErrFile to fd 848...
I1218 12:12:26.992652   10492 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 12:12:27.013148   10492 config.go:182] Loaded profile config "functional-806500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 12:12:27.013692   10492 config.go:182] Loaded profile config "functional-806500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 12:12:27.013880   10492 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-806500 ).state
I1218 12:12:29.430853   10492 main.go:141] libmachine: [stdout =====>] : Running

I1218 12:12:29.430853   10492 main.go:141] libmachine: [stderr =====>] : 
I1218 12:12:29.446010   10492 ssh_runner.go:195] Run: systemctl --version
I1218 12:12:29.446121   10492 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-806500 ).state
I1218 12:12:31.815162   10492 main.go:141] libmachine: [stdout =====>] : Running

I1218 12:12:31.815231   10492 main.go:141] libmachine: [stderr =====>] : 
I1218 12:12:31.815295   10492 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-806500 ).networkadapters[0]).ipaddresses[0]
I1218 12:12:34.556958   10492 main.go:141] libmachine: [stdout =====>] : 192.168.235.2

I1218 12:12:34.556958   10492 main.go:141] libmachine: [stderr =====>] : 
I1218 12:12:34.557501   10492 sshutil.go:53] new ssh client: &{IP:192.168.235.2 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-806500\id_rsa Username:docker}
I1218 12:12:34.713067   10492 ssh_runner.go:235] Completed: systemctl --version: (5.2670462s)
I1218 12:12:34.723951   10492 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (8.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (26.30s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-806500 ssh pgrep buildkitd: exit status 1 (9.6470211s)

** stderr ** 
	W1218 12:12:39.149844    3596 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 image build -t localhost/my-image:functional-806500 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 image build -t localhost/my-image:functional-806500 testdata\build --alsologtostderr: (9.3752096s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-806500 image build -t localhost/my-image:functional-806500 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in c8921b8517e8
Removing intermediate container c8921b8517e8
---> c0648b697a2a
Step 3/3 : ADD content.txt /
---> c0a68afbd3dd
Successfully built c0a68afbd3dd
Successfully tagged localhost/my-image:functional-806500
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-806500 image build -t localhost/my-image:functional-806500 testdata\build --alsologtostderr:
W1218 12:12:48.774563   10872 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I1218 12:12:48.850524   10872 out.go:296] Setting OutFile to fd 808 ...
I1218 12:12:48.869762   10872 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 12:12:48.869762   10872 out.go:309] Setting ErrFile to fd 736...
I1218 12:12:48.869842   10872 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 12:12:48.886150   10872 config.go:182] Loaded profile config "functional-806500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 12:12:48.904128   10872 config.go:182] Loaded profile config "functional-806500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 12:12:48.905075   10872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-806500 ).state
I1218 12:12:51.026474   10872 main.go:141] libmachine: [stdout =====>] : Running

I1218 12:12:51.026675   10872 main.go:141] libmachine: [stderr =====>] : 
I1218 12:12:51.040088   10872 ssh_runner.go:195] Run: systemctl --version
I1218 12:12:51.040088   10872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-806500 ).state
I1218 12:12:53.174920   10872 main.go:141] libmachine: [stdout =====>] : Running

I1218 12:12:53.175156   10872 main.go:141] libmachine: [stderr =====>] : 
I1218 12:12:53.175297   10872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-806500 ).networkadapters[0]).ipaddresses[0]
I1218 12:12:55.704163   10872 main.go:141] libmachine: [stdout =====>] : 192.168.235.2

I1218 12:12:55.704417   10872 main.go:141] libmachine: [stderr =====>] : 
I1218 12:12:55.704417   10872 sshutil.go:53] new ssh client: &{IP:192.168.235.2 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-806500\id_rsa Username:docker}
I1218 12:12:55.805227   10872 ssh_runner.go:235] Completed: systemctl --version: (4.765026s)
I1218 12:12:55.805313   10872 build_images.go:151] Building image from path: C:\Users\jenkins.minikube7\AppData\Local\Temp\build.351005802.tar
I1218 12:12:55.817676   10872 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1218 12:12:55.848250   10872 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.351005802.tar
I1218 12:12:55.855282   10872 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.351005802.tar: stat -c "%s %y" /var/lib/minikube/build/build.351005802.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.351005802.tar': No such file or directory
I1218 12:12:55.855488   10872 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\AppData\Local\Temp\build.351005802.tar --> /var/lib/minikube/build/build.351005802.tar (3072 bytes)
I1218 12:12:55.910766   10872 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.351005802
I1218 12:12:55.939746   10872 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.351005802 -xf /var/lib/minikube/build/build.351005802.tar
I1218 12:12:55.954089   10872 docker.go:346] Building image: /var/lib/minikube/build/build.351005802
I1218 12:12:55.963564   10872 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-806500 /var/lib/minikube/build/build.351005802
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I1218 12:12:57.926635   10872 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-806500 /var/lib/minikube/build/build.351005802: (1.9623587s)
I1218 12:12:57.942431   10872 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.351005802
I1218 12:12:57.972942   10872 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.351005802.tar
I1218 12:12:57.991432   10872 build_images.go:207] Built localhost/my-image:functional-806500 from C:\Users\jenkins.minikube7\AppData\Local\Temp\build.351005802.tar
I1218 12:12:57.991509   10872 build_images.go:123] succeeded building to: functional-806500
I1218 12:12:57.991509   10872 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 image ls: (7.2725246s)
E1218 12:14:02.404001   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (26.30s)
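Note: judging from the Step 1/3 through 3/3 output above, the testdata\build context appears to contain a content.txt file and a three-line Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A minimal sketch of reproducing the build by hand:

    out/minikube-windows-amd64.exe -p functional-806500 image build -t localhost/my-image:functional-806500 testdata\build
    # Confirm the new tag landed in the cluster's image store:
    out/minikube-windows-amd64.exe -p functional-806500 image ls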

TestFunctional/parallel/ImageCommands/Setup (3.83s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
E1218 12:10:25.572011   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.5852524s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-806500
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.83s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-806500 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 10416: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (22.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 image load --daemon gcr.io/google-containers/addon-resizer:functional-806500 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 image load --daemon gcr.io/google-containers/addon-resizer:functional-806500 --alsologtostderr: (14.6629438s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 image ls: (7.9028437s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (22.57s)
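Note: ImageLoadDaemon moves an image from the host's Docker daemon into the cluster's runtime. A minimal sketch combining the Setup and load steps of this run:

    docker pull gcr.io/google-containers/addon-resizer:1.8.8
    docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-806500
    out/minikube-windows-amd64.exe -p functional-806500 image load --daemon gcr.io/google-containers/addon-resizer:functional-806500
    # The tag should now appear in the cluster's image list:
    out/minikube-windows-amd64.exe -p functional-806500 image ls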

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (20.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 image load --daemon gcr.io/google-containers/addon-resizer:functional-806500 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 image load --daemon gcr.io/google-containers/addon-resizer:functional-806500 --alsologtostderr: (12.2778598s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 image ls: (8.1525395s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (20.43s)

TestFunctional/parallel/DockerEnv/powershell (45.13s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-806500 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-806500"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-806500 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-806500": (30.0306317s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-806500 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-806500 docker-env | Invoke-Expression ; docker images": (15.0861202s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (45.13s)
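Note: the DockerEnv test works by piping the environment assignments emitted by docker-env into Invoke-Expression, which points the host's docker CLI at the daemon inside the VM. The same pattern by hand in PowerShell:

    out/minikube-windows-amd64.exe -p functional-806500 docker-env | Invoke-Expression
    docker images   # now lists the VM daemon's images rather than the host's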

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (26.30s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.9110432s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-806500
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 image load --daemon gcr.io/google-containers/addon-resizer:functional-806500 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 image load --daemon gcr.io/google-containers/addon-resizer:functional-806500 --alsologtostderr: (14.4194744s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 image ls: (7.70133s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (26.30s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.58s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 update-context --alsologtostderr -v=2
functional_test.go:2118: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 update-context --alsologtostderr -v=2: (2.5779555s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.58s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.56s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 update-context --alsologtostderr -v=2
functional_test.go:2118: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 update-context --alsologtostderr -v=2: (2.5615511s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.56s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.58s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 update-context --alsologtostderr -v=2
functional_test.go:2118: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 update-context --alsologtostderr -v=2: (2.5745105s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.58s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.80s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 image save gcr.io/google-containers/addon-resizer:functional-806500 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 image save gcr.io/google-containers/addon-resizer:functional-806500 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (8.7994646s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.80s)

TestFunctional/parallel/ImageCommands/ImageRemove (15.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 image rm gcr.io/google-containers/addon-resizer:functional-806500 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 image rm gcr.io/google-containers/addon-resizer:functional-806500 --alsologtostderr: (7.9076409s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 image ls: (7.6378839s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (15.55s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (17.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (10.110954s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 image ls: (7.7298897s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (17.84s)
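Note: ImageSaveToFile and ImageLoadFromFile together form a tarball round-trip. A sketch of the same flow, with a hypothetical local path standing in for the Jenkins workspace path used above:

    out/minikube-windows-amd64.exe -p functional-806500 image save gcr.io/google-containers/addon-resizer:functional-806500 C:\tmp\addon-resizer-save.tar
    out/minikube-windows-amd64.exe -p functional-806500 image load C:\tmp\addon-resizer-save.tar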

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-806500
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-806500 image save --daemon gcr.io/google-containers/addon-resizer:functional-806500 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-806500 image save --daemon gcr.io/google-containers/addon-resizer:functional-806500 --alsologtostderr: (9.0220925s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-806500
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.44s)

TestFunctional/delete_addon-resizer_images (0.49s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-806500
--- PASS: TestFunctional/delete_addon-resizer_images (0.49s)

TestFunctional/delete_my-image_image (0.19s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-806500
--- PASS: TestFunctional/delete_my-image_image (0.19s)

TestFunctional/delete_minikube_cached_images (0.20s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-806500
--- PASS: TestFunctional/delete_minikube_cached_images (0.20s)

TestImageBuild/serial/Setup (187.57s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-076500 --driver=hyperv
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-076500 --driver=hyperv: (3m7.5666747s)
--- PASS: TestImageBuild/serial/Setup (187.57s)

TestImageBuild/serial/NormalBuild (9.04s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-076500
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-076500: (9.0411466s)
--- PASS: TestImageBuild/serial/NormalBuild (9.04s)

TestImageBuild/serial/BuildWithBuildArg (8.69s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-076500
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-076500: (8.6887649s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (8.69s)
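Note: BuildWithBuildArg forwards a build argument through --build-opt. A sketch, assuming the test-arg Dockerfile declares a matching ARG (the Dockerfile itself does not appear in this log):

    out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-076500
    # Inside such a Dockerfile the value would be consumed along the lines of:
    #   ARG ENV_A
    #   RUN echo $ENV_A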

TestImageBuild/serial/BuildWithDockerIgnore (7.53s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-076500
E1218 12:18:42.359202   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 12:18:42.375039   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 12:18:42.390391   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 12:18:42.422170   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 12:18:42.468389   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 12:18:42.563147   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 12:18:42.738682   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 12:18:43.071652   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 12:18:43.725722   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 12:18:45.009639   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 12:18:47.583625   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-076500: (7.5273745s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.53s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.46s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-076500
E1218 12:18:52.713410   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-076500: (7.4615212s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.46s)

TestIngressAddonLegacy/StartLegacyK8sCluster (209.87s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-134100 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv
E1218 12:20:04.413534   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 12:21:26.334826   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-134100 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv: (3m29.8676777s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (209.87s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (38.31s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-134100 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-134100 addons enable ingress --alsologtostderr -v=5: (38.3111097s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (38.31s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (14.48s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-134100 addons enable ingress-dns --alsologtostderr -v=5
E1218 12:23:42.373138   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-134100 addons enable ingress-dns --alsologtostderr -v=5: (14.4776659s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (14.48s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (80.89s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-134100 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-134100 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (1.6211837s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-134100 replace --force -f testdata\nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-134100 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f4c364a5-9ed9-4542-bdf9-ee672ac7725f] Pending
helpers_test.go:344: "nginx" [f4c364a5-9ed9-4542-bdf9-ee672ac7725f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1218 12:24:02.404918   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 12:24:10.178333   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
helpers_test.go:344: "nginx" [f4c364a5-9ed9-4542-bdf9-ee672ac7725f] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 28.0077053s
addons_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-134100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-134100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.3068863s)
addons_test.go:268: debug: unexpected stderr for out/minikube-windows-amd64.exe -p ingress-addon-legacy-134100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W1218 12:24:27.332925   14932 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-134100 replace --force -f testdata\ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-134100 ip
addons_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-134100 ip: (2.4864404s)
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.233.204
addons_test.go:305: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-134100 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-134100 addons disable ingress-dns --alsologtostderr -v=1: (16.2879521s)
addons_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-134100 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-134100 addons disable ingress --alsologtostderr -v=1: (21.4247472s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (80.89s)
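
The ingress-dns check above works by pointing nslookup at the cluster itself: hello-john.test is served by the DNS server that the ingress-dns addon runs in-cluster, so the query goes straight to the minikube IP (192.168.233.204 in this run) instead of the system resolver. A minimal Go sketch of the same lookup, assuming nothing beyond the hostname and IP shown in the log:

package main

import (
	"context"
	"fmt"
	"net"
)

func main() {
	// Send the query to the in-cluster ingress-dns server; the IP is
	// the "minikube ip" output captured in the log above.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			var d net.Dialer
			return d.DialContext(ctx, "udp", "192.168.233.204:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "hello-john.test")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("hello-john.test ->", addrs)
}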

TestJSONOutput/start/Command (232.78s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-560600 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E1218 12:27:05.578683   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 12:28:42.372362   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 12:28:56.560796   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 12:28:56.576021   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 12:28:56.592209   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 12:28:56.623845   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 12:28:56.671584   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 12:28:56.765258   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 12:28:56.939163   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 12:28:57.270289   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 12:28:57.920111   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 12:28:59.204433   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 12:29:01.773776   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 12:29:02.409518   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 12:29:06.902003   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 12:29:17.155776   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 12:29:37.646112   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-560600 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m52.7761807s)
--- PASS: TestJSONOutput/start/Command (232.78s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (7.82s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-560600 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-560600 --output=json --user=testUser: (7.8183697s)
--- PASS: TestJSONOutput/pause/Command (7.82s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (7.73s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-560600 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-560600 --output=json --user=testUser: (7.7340637s)
--- PASS: TestJSONOutput/unpause/Command (7.73s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (28.5s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-560600 --output=json --user=testUser
E1218 12:30:18.612709   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-560600 --output=json --user=testUser: (28.4978063s)
--- PASS: TestJSONOutput/stop/Command (28.50s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.51s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-230300 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-230300 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (264.5139ms)
-- stdout --
	{"specversion":"1.0","id":"042993ee-d144-404a-80f4-909fd6b20b32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-230300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1c291c85-aeb4-4013-ac23-cc761e53fd42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube7\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"64c8b27c-eaf6-4165-8e92-6b4edcb4aac3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ca5aa03d-0cbb-4370-be40-a23a4757bcba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"8daaa722-b721-4103-aa10-9cf00748de82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17824"}}
	{"specversion":"1.0","id":"94617597-8409-498e-a4d8-221a84f35811","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"613fd7c3-b0b0-4f6e-920a-2718040e414a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
** stderr ** 
	W1218 12:30:59.191806    8152 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-230300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-230300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-230300: (1.2459029s)
--- PASS: TestErrorJSONOutput (1.51s)
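
The stdout above is minikube's --output=json event stream: one CloudEvents-style object per line, with the event kind in "type" (io.k8s.sigs.minikube.step, .info, .error) and the payload under "data". A minimal consumer sketch in Go, assuming only the envelope fields visible in the log and treating every payload value as a plain string, as they all appear here:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields seen in the log; the other envelope fields
// (specversion, id, source, datacontenttype) are simply ignored.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // non-JSON lines (e.g. stderr warnings) are skipped
		}
		fmt.Printf("%-36s %s\n", ev.Type, ev.Data["message"])
	}
}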

TestMainNoArgs (0.26s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.26s)

TestMinikubeProfile (491.42s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-002000 --driver=hyperv
E1218 12:31:40.535412   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 12:33:42.372511   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 12:33:56.549519   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 12:34:02.406808   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-002000 --driver=hyperv: (3m7.1602534s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-002000 --driver=hyperv
E1218 12:34:24.378940   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 12:35:05.542632   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-002000 --driver=hyperv: (3m9.9836193s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-002000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (14.5879837s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-002000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (14.6541219s)
helpers_test.go:175: Cleaning up "second-002000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-002000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-002000: (42.617121s)
helpers_test.go:175: Cleaning up "first-002000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-002000
E1218 12:38:42.371923   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 12:38:56.551438   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 12:39:02.412489   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-002000: (41.4729719s)
--- PASS: TestMinikubeProfile (491.42s)

TestMountStart/serial/StartWithMountFirst (146.98s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-926400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-926400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m25.9832958s)
--- PASS: TestMountStart/serial/StartWithMountFirst (146.98s)

TestMountStart/serial/VerifyMountFirst (9.44s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-926400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-926400 ssh -- ls /minikube-host: (9.4402401s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.44s)

TestPreload (492.84s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-836900 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E1218 13:13:42.374259   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 13:13:56.563600   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 13:14:02.417967   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-836900 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m17.6775519s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-836900 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-836900 image pull gcr.io/k8s-minikube/busybox: (8.4916181s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-836900
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-836900: (33.6999471s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-836900 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E1218 13:17:05.619925   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 13:18:39.769826   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 13:18:42.377192   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 13:18:56.558493   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-836900 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m28.2786562s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-836900 image list
E1218 13:19:02.418429   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-836900 image list: (7.3614075s)
helpers_test.go:175: Cleaning up "test-preload-836900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-836900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-836900: (37.3317679s)
--- PASS: TestPreload (492.84s)
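
The sequence above is the whole point of TestPreload: pull an image into a cluster started with --preload=false, stop and restart the cluster, then confirm "image list" still shows the image. A rough Go sketch of the same sequence, reusing the binary path, profile name, and image from the log (a sketch of the check, not the test's actual implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
	return string(out), err
}

func main() {
	const profile = "test-preload-836900"
	steps := [][]string{
		{"-p", profile, "image", "pull", "gcr.io/k8s-minikube/busybox"},
		{"stop", "-p", profile},
		{"start", "-p", profile, "--wait=true", "--driver=hyperv"},
	}
	for _, s := range steps {
		if out, err := run(s...); err != nil {
			fmt.Println(out)
			panic(err)
		}
	}
	images, err := run("-p", profile, "image", "list")
	if err != nil {
		panic(err)
	}
	if !strings.Contains(images, "busybox") {
		fmt.Println("image was lost across the stop/start cycle")
	}
}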

TestScheduledStopWindows (325.23s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-605800 --memory=2048 --driver=hyperv
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-605800 --memory=2048 --driver=hyperv: (3m12.115626s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-605800 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-605800 --schedule 5m: (10.7597747s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-605800 -n scheduled-stop-605800
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-605800 -n scheduled-stop-605800: exit status 1 (10.0508424s)
** stderr ** 
	W1218 13:23:04.327297    2692 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-605800 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-605800 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.7141966s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-605800 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-605800 --schedule 5s: (10.6478408s)
E1218 13:23:42.379784   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 13:23:56.559851   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 13:24:02.428496   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-605800
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-605800: exit status 7 (2.4267389s)
-- stdout --
	scheduled-stop-605800
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	W1218 13:24:34.750914   13764 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-605800 -n scheduled-stop-605800
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-605800 -n scheduled-stop-605800: exit status 7 (2.4300386s)
-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W1218 13:24:37.190495    2444 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-605800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-605800
E1218 13:25:05.565071   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-605800: (27.0723686s)
--- PASS: TestScheduledStopWindows (325.23s)

TestKubernetesUpgrade (879.72s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-719500 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-719500 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv: (3m38.5731785s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-719500
E1218 13:28:56.564536   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 13:29:02.416299   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
version_upgrade_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-719500: (38.8235715s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-719500 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-719500 status --format={{.Host}}: exit status 7 (2.7224595s)
-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W1218 13:29:24.091112   14192 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-719500 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-719500 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (3m25.6919594s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-719500 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-719500 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperv
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-719500 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperv: exit status 106 (313.0637ms)
-- stdout --
	* [kubernetes-upgrade-719500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17824
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
-- /stdout --
** stderr ** 
	W1218 13:32:52.720754    3996 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-719500
	    minikube start -p kubernetes-upgrade-719500 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7195002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-719500 --kubernetes-version=v1.29.0-rc.2
	    
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-719500 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:288: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-719500 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (6m10.4598011s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-719500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-719500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-719500: (42.9297412s)
--- PASS: TestKubernetesUpgrade (879.72s)

TestStoppedBinaryUpgrade/Setup (0.53s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.53s)

TestPause/serial/Start (342.67s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-984000 --memory=2048 --install-addons=false --wait=all --driver=hyperv
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-984000 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (5m42.6656447s)
--- PASS: TestPause/serial/Start (342.67s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.29s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-137000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-137000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (294.4421ms)
-- stdout --
	* [NoKubernetes-137000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17824
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
-- /stdout --
** stderr ** 
	W1218 13:34:24.944971    2204 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.29s)

TestStoppedBinaryUpgrade/MinikubeLogs (10.41s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-592200
version_upgrade_test.go:219: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-592200: (10.4142893s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (10.41s)

TestPause/serial/SecondStartNoReconfiguration (332.14s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-984000 --alsologtostderr -v=1 --driver=hyperv
E1218 13:38:42.376046   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 13:38:56.569312   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 13:39:02.429636   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-984000 --alsologtostderr -v=1 --driver=hyperv: (5m32.1148951s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (332.14s)

TestPause/serial/Pause (8.45s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-984000 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-984000 --alsologtostderr -v=5: (8.4465294s)
--- PASS: TestPause/serial/Pause (8.45s)

TestPause/serial/VerifyStatus (12.92s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-984000 --output=json --layout=cluster
E1218 13:43:56.577243   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 13:44:02.423041   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-984000 --output=json --layout=cluster: exit status 2 (12.9238069s)
-- stdout --
	{"Name":"pause-984000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-984000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	W1218 13:43:52.038404    5928 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
--- PASS: TestPause/serial/VerifyStatus (12.92s)
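
The --layout=cluster document above reports component health as HTTP-style status codes (200 OK, 405 Stopped, 418 Paused), and a paused cluster makes the status command exit 2, which is what the test expects. A Go struct sketch matching the fields in that JSON; the nesting is inferred from this single sample, so treat it as an assumption rather than minikube's canonical schema:

package main

import (
	"encoding/json"
	"fmt"
)

// Field names copied from the JSON above.
type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"` // 200=OK, 405=Stopped, 418=Paused
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name          string               `json:"Name"`
	StatusCode    int                  `json:"StatusCode"`
	StatusName    string               `json:"StatusName"`
	Step          string               `json:"Step"`
	StepDetail    string               `json:"StepDetail"`
	BinaryVersion string               `json:"BinaryVersion"`
	Components    map[string]component `json:"Components"`
	Nodes         []node               `json:"Nodes"`
}

func main() {
	raw := `{"Name":"pause-984000","StatusCode":418,"StatusName":"Paused"}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Println(st.Name, "is", st.StatusName)
}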

TestPause/serial/Unpause (8.05s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-984000 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-984000 --alsologtostderr -v=5: (8.0513015s)
--- PASS: TestPause/serial/Unpause (8.05s)

TestPause/serial/PauseAgain (8.18s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-984000 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-984000 --alsologtostderr -v=5: (8.1801332s)
--- PASS: TestPause/serial/PauseAgain (8.18s)

TestPause/serial/DeletePaused (42.52s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-984000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-984000 --alsologtostderr -v=5: (42.5197335s)
--- PASS: TestPause/serial/DeletePaused (42.52s)

TestPause/serial/VerifyDeletedResources (18.26s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (18.2590387s)
--- PASS: TestPause/serial/VerifyDeletedResources (18.26s)

TestNetworkPlugins/group/auto/Start (431.63s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-353700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperv
E1218 13:48:42.377118   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 13:48:56.574310   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 13:49:02.424032   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-353700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperv: (7m11.6343287s)
--- PASS: TestNetworkPlugins/group/auto/Start (431.63s)

TestNetworkPlugins/group/kindnet/Start (422.02s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-353700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperv
E1218 13:51:59.802369   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-353700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperv: (7m2.0162116s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (422.02s)

TestNetworkPlugins/group/calico/Start (505.25s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-353700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperv
E1218 13:53:42.392117   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 13:53:56.568438   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 13:54:02.433715   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-353700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperv: (8m25.2497187s)
--- PASS: TestNetworkPlugins/group/calico/Start (505.25s)

TestNetworkPlugins/group/auto/KubeletFlags (10.1s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-353700 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-353700 "pgrep -a kubelet": (10.0997197s)
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (10.10s)

TestNetworkPlugins/group/auto/NetCatPod (16.49s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-353700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cvmc4" [9dd6cfe2-2010-4286-888c-19a6c05511d1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cvmc4" [9dd6cfe2-2010-4286-888c-19a6c05511d1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 16.00779s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (16.49s)

TestNetworkPlugins/group/auto/DNS (0.34s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-353700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.34s)

TestNetworkPlugins/group/auto/Localhost (0.42s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-353700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.42s)

TestNetworkPlugins/group/auto/HairPin (0.35s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-353700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.35s)
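Note: Localhost and HairPin are both zero-I/O netcat scans run inside the pod: -z probes the port without sending data, -w 5 sets a 5-second timeout, and -i 5 a 5-second interval. Localhost targets localhost:8080 directly, while HairPin connects to the pod's own "netcat" service name, confirming the pod can reach itself back through the service (hairpin traffic). A rough Go sketch of driving the same probe through kubectl, with the context name taken from this log (illustrative, not the test's helper code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Illustrative sketch: run the hairpin probe the way the command line
    // above does, and report reachability via the exit code.
    func main() {
    	cmd := exec.Command("kubectl", "--context", "auto-353700",
    		"exec", "deployment/netcat", "--",
    		"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
    	if err := cmd.Run(); err != nil {
    		fmt.Println("hairpin check failed:", err)
    		return
    	}
    	fmt.Println("hairpin check passed")
    }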

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-n8nw9" [68f3b441-c053-4d4b-926a-716f1baf9efc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0107671s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (11.15s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-353700 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kindnet-353700 "pgrep -a kubelet": (11.1467225s)
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (11.15s)

TestNetworkPlugins/group/kindnet/NetCatPod (17.53s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-353700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9xd8h" [3723c2e9-a100-455e-89c7-da9138b1ec42] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9xd8h" [3723c2e9-a100-455e-89c7-da9138b1ec42] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 17.0161157s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (17.53s)

TestNetworkPlugins/group/kindnet/DNS (0.54s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-353700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.54s)

TestNetworkPlugins/group/kindnet/Localhost (0.44s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-353700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.44s)

TestNetworkPlugins/group/kindnet/HairPin (0.43s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-353700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.43s)

TestNetworkPlugins/group/custom-flannel/Start (306.62s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-353700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=hyperv
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-353700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=hyperv: (5m6.6178788s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (306.62s)

TestNetworkPlugins/group/calico/ControllerPod (6.03s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-h26cl" [4592c9bb-664a-4265-9345-87818e4b02c1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.0237862s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.03s)

TestNetworkPlugins/group/calico/KubeletFlags (12.27s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-353700 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p calico-353700 "pgrep -a kubelet": (12.2650854s)
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (12.27s)

TestNetworkPlugins/group/calico/NetCatPod (17.59s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-353700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kgf7t" [e326e6bd-aafe-4d56-9e36-9df84d8de0a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kgf7t" [e326e6bd-aafe-4d56-9e36-9df84d8de0a7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 17.012804s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (17.59s)

TestNetworkPlugins/group/calico/DNS (0.4s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-353700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.40s)

TestNetworkPlugins/group/calico/Localhost (0.36s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-353700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.36s)

TestNetworkPlugins/group/calico/HairPin (0.38s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-353700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.38s)

TestNetworkPlugins/group/false/Start (252.6s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-353700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperv
E1218 14:03:42.385985   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
E1218 14:03:56.581864   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-134100\client.crt: The system cannot find the path specified.
E1218 14:04:02.429983   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-353700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperv: (4m12.6044388s)
--- PASS: TestNetworkPlugins/group/false/Start (252.60s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (12.38s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-353700 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p custom-flannel-353700 "pgrep -a kubelet": (12.3765277s)
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (12.38s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (16.66s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-353700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-smqv8" [b20eaf13-c389-4935-a99b-ce13afff424b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-smqv8" [b20eaf13-c389-4935-a99b-ce13afff424b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 16.0121887s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (16.66s)

TestNetworkPlugins/group/custom-flannel/DNS (0.66s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-353700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.66s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.44s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-353700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.44s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.42s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-353700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.42s)

TestNetworkPlugins/group/enable-default-cni/Start (275.8s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-353700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperv
E1218 14:06:00.149386   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-353700\client.crt: The system cannot find the path specified.
E1218 14:06:10.402206   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-353700\client.crt: The system cannot find the path specified.
E1218 14:06:30.895555   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-353700\client.crt: The system cannot find the path specified.
E1218 14:07:05.661542   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-922300\client.crt: The system cannot find the path specified.
E1218 14:07:11.859377   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-353700\client.crt: The system cannot find the path specified.
E1218 14:07:40.803839   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-353700\client.crt: The system cannot find the path specified.
E1218 14:07:40.819506   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-353700\client.crt: The system cannot find the path specified.
E1218 14:07:40.834647   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-353700\client.crt: The system cannot find the path specified.
E1218 14:07:40.866349   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-353700\client.crt: The system cannot find the path specified.
E1218 14:07:40.913743   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-353700\client.crt: The system cannot find the path specified.
E1218 14:07:41.008697   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-353700\client.crt: The system cannot find the path specified.
E1218 14:07:41.182279   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-353700\client.crt: The system cannot find the path specified.
E1218 14:07:41.518868   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-353700\client.crt: The system cannot find the path specified.
E1218 14:07:42.167298   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-353700\client.crt: The system cannot find the path specified.
E1218 14:07:43.447712   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-353700\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-353700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperv: (4m35.7975833s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (275.80s)
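Note: the repeated "cert_rotation.go:168] key failed" warnings above all carry the same PID (14928), i.e. they come from the shared test process rather than from the test whose output they interleave. They appear to be client-go's certificate-reload watcher still tracking client.crt files for earlier profiles (functional-806500, kindnet-353700, and others) whose .minikube\profiles directories have since been cleaned up, so each reload attempt fails at open(). The failing condition is simply a missing file, e.g.:

    package main

    import (
    	"fmt"
    	"os"
    )

    // The path is copied from the warning above; once the profile directory
    // is deleted, any attempt to re-read the key/cert pair fails like this.
    func main() {
    	p := `C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-353700\client.crt`
    	if _, err := os.Stat(p); err != nil {
    		fmt.Println("client.crt unreadable:", err)
    	}
    }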

TestNetworkPlugins/group/false/KubeletFlags (12.91s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-353700 "pgrep -a kubelet"
E1218 14:07:46.015035   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-353700\client.crt: The system cannot find the path specified.
E1218 14:07:51.149584   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-353700\client.crt: The system cannot find the path specified.
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-353700 "pgrep -a kubelet": (12.9126482s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (12.91s)

TestNetworkPlugins/group/false/NetCatPod (17.6s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-353700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-f9ff7" [d718d7b7-82a6-4042-bd0e-40bbc990f91f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1218 14:08:01.396547   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-353700\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-56589dfd74-f9ff7" [d718d7b7-82a6-4042-bd0e-40bbc990f91f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 17.0146719s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (17.60s)

TestNetworkPlugins/group/false/DNS (0.44s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-353700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.44s)

TestNetworkPlugins/group/false/Localhost (0.41s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-353700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.41s)

TestNetworkPlugins/group/false/HairPin (0.43s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-353700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.43s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (12.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-353700 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-353700 "pgrep -a kubelet": (12.1481233s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (12.15s)

TestNetworkPlugins/group/flannel/Start (255.61s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-353700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperv
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-353700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperv: (4m15.610638s)
--- PASS: TestNetworkPlugins/group/flannel/Start (255.61s)
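Note: the Start tests in this group differ only in the CNI flag passed to the start command: a built-in plugin (--cni=calico, --cni=flannel), a user-supplied manifest (--cni=testdata\kube-flannel.yaml), no CNI at all (--cni=false), or the bridge CNI via --enable-default-cni=true. A table-driven Go sketch that reconstructs the invocations seen in this log (illustrative only; net_test.go assembles its arguments differently):

    package main

    import "fmt"

    // Illustrative: the profile-to-CNI-flag pairings exercised above.
    func main() {
    	variants := []struct{ profile, cniFlag string }{
    		{"calico-353700", "--cni=calico"},
    		{"custom-flannel-353700", `--cni=testdata\kube-flannel.yaml`},
    		{"false-353700", "--cni=false"},
    		{"enable-default-cni-353700", "--enable-default-cni=true"},
    		{"flannel-353700", "--cni=flannel"},
    	}
    	for _, v := range variants {
    		fmt.Printf("out/minikube-windows-amd64.exe start -p %s --memory=3072 "+
    			"--alsologtostderr --wait=true --wait-timeout=15m %s --driver=hyperv\n",
    			v.profile, v.cniFlag)
    	}
    }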

TestNetworkPlugins/group/enable-default-cni/NetCatPod (16.69s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-353700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7jbf7" [472d69cd-c8d1-4898-bc40-e31db4ecfd29] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1218 14:10:49.801682   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-353700\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-56589dfd74-7jbf7" [472d69cd-c8d1-4898-bc40-e31db4ecfd29] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 16.0206027s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (16.69s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.41s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-353700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.41s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.41s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-353700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.41s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.39s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-353700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.39s)

TestNetworkPlugins/group/flannel/ControllerPod (6.03s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-67p92" [9fd798aa-ee55-488c-85cd-b7629e8257ae] Running
E1218 14:15:05.602235   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-806500\client.crt: The system cannot find the path specified.
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0247036s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.03s)

TestNetworkPlugins/group/flannel/KubeletFlags (12.48s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-353700 "pgrep -a kubelet"
E1218 14:15:07.120406   14928 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\custom-flannel-353700\client.crt: The system cannot find the path specified.
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p flannel-353700 "pgrep -a kubelet": (12.4779655s)
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (12.48s)

TestNetworkPlugins/group/flannel/NetCatPod (17.58s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-353700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hrgg5" [8f06980b-a545-4d1d-8c92-757b91c01a9f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hrgg5" [8f06980b-a545-4d1d-8c92-757b91c01a9f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 17.0130554s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (17.58s)

TestNetworkPlugins/group/flannel/DNS (0.35s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-353700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.35s)

TestNetworkPlugins/group/flannel/Localhost (0.34s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-353700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.34s)

TestNetworkPlugins/group/flannel/HairPin (0.34s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-353700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.34s)

Test skip (33/252)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.02s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-806500 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-806500 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 13828: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.02s)

TestFunctional/parallel/DryRun (5.06s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-806500 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-806500 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0594228s)

-- stdout --
	* [functional-806500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17824
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W1218 12:09:14.335294    5944 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1218 12:09:14.451497    5944 out.go:296] Setting OutFile to fd 804 ...
	I1218 12:09:14.451497    5944 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 12:09:14.451497    5944 out.go:309] Setting ErrFile to fd 868...
	I1218 12:09:14.452492    5944 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 12:09:14.499007    5944 out.go:303] Setting JSON to false
	I1218 12:09:14.504950    5944 start.go:128] hostinfo: {"hostname":"minikube7","uptime":1829,"bootTime":1702899525,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1218 12:09:14.505083    5944 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1218 12:09:14.505745    5944 out.go:177] * [functional-806500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1218 12:09:14.506845    5944 notify.go:220] Checking for updates...
	I1218 12:09:14.507707    5944 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 12:09:14.508725    5944 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 12:09:14.509721    5944 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1218 12:09:14.510699    5944 out.go:177]   - MINIKUBE_LOCATION=17824
	I1218 12:09:14.511705    5944 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 12:09:14.513710    5944 config.go:182] Loaded profile config "functional-806500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 12:09:14.514706    5944 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.06s)
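Note: DryRun (and InternationalLanguage below, which is the same scenario under a French locale, hence the "minikube v1.32.0 sur Microsoft Windows" line in its stdout) deliberately passes an undersized --memory 250MB so that start fails validation; the interesting part is the error output. On Hyper-V that stderr check is unreliable, so after observing the non-zero exit the test skips itself, citing https://github.com/kubernetes/minikube/issues/9785. Loosely, the gate looks like this (a sketch with a hypothetical helper name; the real functional_test.go condition and message differ):

    package example

    import (
    	"runtime"
    	"testing"
    )

    // Hypothetical helper illustrating a driver-gated skip.
    func skipIfHyperVOnWindows(t *testing.T, driver string) {
    	if runtime.GOOS == "windows" && driver == "hyperv" {
    		t.Skip("skipping on HyperV until https://github.com/kubernetes/minikube/issues/9785 is resolved")
    	}
    }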

TestFunctional/parallel/InternationalLanguage (5.05s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-806500 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-806500 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0539598s)

-- stdout --
	* [functional-806500] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17824
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W1218 12:09:09.312823    7236 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1218 12:09:09.389950    7236 out.go:296] Setting OutFile to fd 484 ...
	I1218 12:09:09.389950    7236 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 12:09:09.389950    7236 out.go:309] Setting ErrFile to fd 892...
	I1218 12:09:09.389950    7236 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 12:09:09.414413    7236 out.go:303] Setting JSON to false
	I1218 12:09:09.418974    7236 start.go:128] hostinfo: {"hostname":"minikube7","uptime":1824,"bootTime":1702899525,"procs":203,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1218 12:09:09.419160    7236 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1218 12:09:09.420514    7236 out.go:177] * [functional-806500] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1218 12:09:09.421481    7236 notify.go:220] Checking for updates...
	I1218 12:09:09.422228    7236 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1218 12:09:09.423123    7236 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 12:09:09.423844    7236 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1218 12:09:09.424808    7236 out.go:177]   - MINIKUBE_LOCATION=17824
	I1218 12:09:09.425200    7236 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 12:09:09.427004    7236 config.go:182] Loaded profile config "functional-806500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 12:09:09.427004    7236 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.05s)

TestFunctional/parallel/MountCmd (0s)
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (17.79s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-353700 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-353700

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-353700

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-353700

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-353700

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-353700

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-353700

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-353700

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-353700

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-353700

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-353700

>>> host: /etc/nsswitch.conf:
W1218 13:25:08.503007    9888 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: /etc/hosts:
W1218 13:25:08.794424    4012 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: /etc/resolv.conf:
W1218 13:25:09.089950   14364 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-353700

                                                
                                                

                                                
                                                
>>> host: crictl pods:
W1218 13:25:09.588401   12400 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: crictl containers:
W1218 13:25:09.910313    2400 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> k8s: describe netcat deployment:
error: context "cilium-353700" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-353700" does not exist

>>> k8s: netcat logs:
error: context "cilium-353700" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-353700" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-353700" does not exist

>>> k8s: coredns logs:
error: context "cilium-353700" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-353700" does not exist

>>> k8s: api server logs:
error: context "cilium-353700" does not exist

>>> host: /etc/cni:
W1218 13:25:11.651952   10180 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: ip a s:
W1218 13:25:11.977038    1668 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: ip r s:
W1218 13:25:12.306523    9332 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: iptables-save:
W1218 13:25:12.607840   14900 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: iptables table nat:
W1218 13:25:12.894440    7296 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-353700

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-353700

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-353700" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-353700" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-353700

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-353700

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-353700" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-353700" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-353700" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-353700" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-353700" does not exist

>>> host: kubelet daemon status:
W1218 13:25:15.013828   14288 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: kubelet daemon config:
W1218 13:25:15.311475    1080 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> k8s: kubelet logs:
W1218 13:25:15.620141   10916 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: /etc/kubernetes/kubelet.conf:
W1218 13:25:15.911141    3476 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: /var/lib/kubelet/config.yaml:
W1218 13:25:16.212061   15264 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt
    extensions:
    - extension:
        last-update: Mon, 18 Dec 2023 13:06:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.237.59:8443
  name: multinode-015900-m01
contexts:
- context:
    cluster: multinode-015900-m01
    extensions:
    - extension:
        last-update: Mon, 18 Dec 2023 13:06:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: multinode-015900-m01
  name: multinode-015900-m01
current-context: ""
kind: Config
preferences: {}
users:
- name: multinode-015900-m01
  user:
    client-certificate: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900-m01\client.crt
    client-key: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-015900-m01\client.key
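Note: this kubeconfig explains the two error shapes repeated throughout the dump: current-context is "" and the only context left is multinode-015900-m01, so every command pinned to the deleted cilium-353700 profile fails. A minimal illustration (hypothetical invocations, reusing the context names shown above):

  kubectl --context cilium-353700 get pods -A
  # fails with one of the "context was not found" / "does not exist" errors seen above
  kubectl --context multinode-015900-m01 get pods -A
  # would be expected to reach the surviving multinode cluster instead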

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-353700

>>> host: docker daemon status:
W1218 13:25:16.816765    4064 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: docker daemon config:
W1218 13:25:17.094567    3288 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: /etc/docker/daemon.json:
W1218 13:25:17.367812   12808 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: docker system info:
W1218 13:25:17.643644    6256 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: cri-docker daemon status:
W1218 13:25:17.945737    6148 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: cri-docker daemon config:
W1218 13:25:18.504935    4424 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
W1218 13:25:18.791523   11096 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: /usr/lib/systemd/system/cri-docker.service:
W1218 13:25:19.077740    5116 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: cri-dockerd version:
W1218 13:25:19.558973   10704 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: containerd daemon status:
W1218 13:25:19.908533    4580 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: containerd daemon config:
W1218 13:25:20.291999    7308 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: /lib/systemd/system/containerd.service:
W1218 13:25:20.836413   14596 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: /etc/containerd/config.toml:
W1218 13:25:21.206070   14532 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: containerd config dump:
W1218 13:25:21.540311    8032 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: crio daemon status:
W1218 13:25:21.923215   14552 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: crio daemon config:
W1218 13:25:22.202379   10732 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: /etc/crio:
W1218 13:25:22.517879    3168 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

>>> host: crio config:
W1218 13:25:22.816158   14468 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-353700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-353700"

----------------------- debugLogs end: cilium-353700 [took: 16.4096162s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-353700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-353700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cilium-353700: (1.3840606s)
--- SKIP: TestNetworkPlugins/group/cilium (17.79s)