Test Report: Hyper-V_Windows 17907

7ea9a0daea14a922bd9e219098252b67b1b782a8:2024-01-08:32610

Failed tests (14/208)

TestAddons/parallel/Registry (70.35s)
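The assertion that fails is addons_test.go:364 in the log below: "out/minikube-windows-amd64.exe -p addons-084500 ip" is expected to produce empty stderr, but the Docker CLI prints a warning because the metadata file for its "default" context is missing on the Jenkins host. The snippet that follows is a hypothetical diagnostic sketch, not part of addons_test.go; it simply probes the exact path quoted in that warning from PowerShell.

# Hypothetical spot-check, not from the test suite: does the Docker CLI "default"
# context metadata file named in the warning at addons_test.go:364 exist on this host?
# The hashed directory name is copied verbatim from that warning.
$meta = Join-Path $env:USERPROFILE '.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json'
if (Test-Path $meta) {
    Write-Output "default context metadata present: $meta"
} else {
    Write-Output "default context metadata missing: $meta (this is what makes stderr non-empty)"
}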

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 50.5928ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-zc588" [d5d622ee-1176-4489-805b-fb8dfc8707d8] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.012685s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8zjs2" [10d71de8-568e-4807-93da-2d1e6026570a] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0150079s
addons_test.go:340: (dbg) Run:  kubectl --context addons-084500 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-084500 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-084500 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.9822417s)
addons_test.go:359: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-084500 ip
addons_test.go:359: (dbg) Done: out/minikube-windows-amd64.exe -p addons-084500 ip: (2.8734029s)
addons_test.go:364: expected stderr to be -empty- but got: *"W0108 20:18:07.606590    7328 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-084500 ip"
2024/01/08 20:18:10 [DEBUG] GET http://172.29.100.38:5000
addons_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-084500 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p addons-084500 addons disable registry --alsologtostderr -v=1: (15.8336597s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-084500 -n addons-084500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-084500 -n addons-084500: (13.3689912s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-084500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-084500 logs -n 25: (8.8147434s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-145800 | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:10 UTC |                     |
	|         | -p download-only-145800                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-145800 | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:10 UTC |                     |
	|         | -p download-only-145800                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-145800 | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:11 UTC |                     |
	|         | -p download-only-145800                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:11 UTC | 08 Jan 24 20:11 UTC |
	| delete  | -p download-only-145800                                                                     | download-only-145800 | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:11 UTC | 08 Jan 24 20:11 UTC |
	| delete  | -p download-only-145800                                                                     | download-only-145800 | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:11 UTC | 08 Jan 24 20:11 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-038100 | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:11 UTC |                     |
	|         | binary-mirror-038100                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:50621                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-038100                                                                     | binary-mirror-038100 | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:11 UTC | 08 Jan 24 20:11 UTC |
	| addons  | enable dashboard -p                                                                         | addons-084500        | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:11 UTC |                     |
	|         | addons-084500                                                                               |                      |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-084500        | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:11 UTC |                     |
	|         | addons-084500                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-084500 --wait=true                                                                | addons-084500        | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:11 UTC | 08 Jan 24 20:17 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --driver=hyperv                                                               |                      |                   |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-084500 addons                                                                        | addons-084500        | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:18 UTC |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ssh     | addons-084500 ssh cat                                                                       | addons-084500        | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	|         | /opt/local-path-provisioner/pvc-b343249b-7af6-4a98-9f32-9a613e622e0b_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| ip      | addons-084500 ip                                                                            | addons-084500        | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	| addons  | addons-084500 addons disable                                                                | addons-084500        | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-084500 addons disable                                                                | addons-084500        | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:18 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-084500        | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	|         | addons-084500                                                                               |                      |                   |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-084500        | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:18 UTC |                     |
	|         | -p addons-084500                                                                            |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:11:36
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:11:36.272083    4724 out.go:296] Setting OutFile to fd 840 ...
	I0108 20:11:36.272804    4724 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:11:36.272879    4724 out.go:309] Setting ErrFile to fd 856...
	I0108 20:11:36.272879    4724 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:11:36.290640    4724 out.go:303] Setting JSON to false
	I0108 20:11:36.297129    4724 start.go:128] hostinfo: {"hostname":"minikube7","uptime":22638,"bootTime":1704722057,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0108 20:11:36.297266    4724 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 20:11:36.298416    4724 out.go:177] * [addons-084500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I0108 20:11:36.299271    4724 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 20:11:36.300004    4724 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:11:36.299271    4724 notify.go:220] Checking for updates...
	I0108 20:11:36.301341    4724 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0108 20:11:36.301668    4724 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:11:36.302455    4724 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:11:36.304037    4724 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:11:41.763970    4724 out.go:177] * Using the hyperv driver based on user configuration
	I0108 20:11:41.764296    4724 start.go:298] selected driver: hyperv
	I0108 20:11:41.764845    4724 start.go:902] validating driver "hyperv" against <nil>
	I0108 20:11:41.764845    4724 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:11:41.817088    4724 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 20:11:41.818162    4724 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 20:11:41.818162    4724 cni.go:84] Creating CNI manager for ""
	I0108 20:11:41.818162    4724 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 20:11:41.818162    4724 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 20:11:41.818162    4724 start_flags.go:323] config:
	{Name:addons-084500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-084500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:11:41.818781    4724 iso.go:125] acquiring lock: {Name:mk1869c5b5033c1301af33a7fa364ec43dd63efe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:11:41.820146    4724 out.go:177] * Starting control plane node addons-084500 in cluster addons-084500
	I0108 20:11:41.820870    4724 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 20:11:41.820870    4724 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0108 20:11:41.820870    4724 cache.go:56] Caching tarball of preloaded images
	I0108 20:11:41.820870    4724 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 20:11:41.820870    4724 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0108 20:11:41.822118    4724 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\config.json ...
	I0108 20:11:41.822492    4724 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\config.json: {Name:mkde0b46016ded37de704e74754a0a301a3e24f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:11:41.823637    4724 start.go:365] acquiring machines lock for addons-084500: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 20:11:41.823730    4724 start.go:369] acquired machines lock for "addons-084500" in 0s
	I0108 20:11:41.823955    4724 start.go:93] Provisioning new machine with config: &{Name:addons-084500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-084500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 20:11:41.823955    4724 start.go:125] createHost starting for "" (driver="hyperv")
	I0108 20:11:41.824596    4724 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0108 20:11:41.824968    4724 start.go:159] libmachine.API.Create for "addons-084500" (driver="hyperv")
	I0108 20:11:41.825224    4724 client.go:168] LocalClient.Create starting
	I0108 20:11:41.825931    4724 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0108 20:11:41.947844    4724 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0108 20:11:42.036168    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0108 20:11:44.181537    4724 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0108 20:11:44.181537    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:11:44.181537    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0108 20:11:45.930834    4724 main.go:141] libmachine: [stdout =====>] : False
	
	I0108 20:11:45.930834    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:11:45.930925    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0108 20:11:47.430866    4724 main.go:141] libmachine: [stdout =====>] : True
	
	I0108 20:11:47.430866    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:11:47.431165    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0108 20:11:51.286767    4724 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0108 20:11:51.287709    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:11:51.290010    4724 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0108 20:11:51.741180    4724 main.go:141] libmachine: Creating SSH key...
	I0108 20:11:51.972803    4724 main.go:141] libmachine: Creating VM...
	I0108 20:11:51.972803    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0108 20:11:54.812915    4724 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0108 20:11:54.813105    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:11:54.813221    4724 main.go:141] libmachine: Using switch "Default Switch"
	I0108 20:11:54.813221    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0108 20:11:56.588996    4724 main.go:141] libmachine: [stdout =====>] : True
	
	I0108 20:11:56.589299    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:11:56.589299    4724 main.go:141] libmachine: Creating VHD
	I0108 20:11:56.589396    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0108 20:12:00.325532    4724 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F6DB9CD0-D4A2-4756-86C3-EC1B2A4CC748
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0108 20:12:00.325532    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:12:00.325532    4724 main.go:141] libmachine: Writing magic tar header
	I0108 20:12:00.325532    4724 main.go:141] libmachine: Writing SSH key tar header
	I0108 20:12:00.336864    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0108 20:12:03.535633    4724 main.go:141] libmachine: [stdout =====>] : 
	I0108 20:12:03.535633    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:12:03.535633    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\disk.vhd' -SizeBytes 20000MB
	I0108 20:12:06.030125    4724 main.go:141] libmachine: [stdout =====>] : 
	I0108 20:12:06.030292    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:12:06.030357    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-084500 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0108 20:12:09.567180    4724 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-084500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0108 20:12:09.567180    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:12:09.567180    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-084500 -DynamicMemoryEnabled $false
	I0108 20:12:11.795748    4724 main.go:141] libmachine: [stdout =====>] : 
	I0108 20:12:11.795897    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:12:11.795963    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-084500 -Count 2
	I0108 20:12:13.896058    4724 main.go:141] libmachine: [stdout =====>] : 
	I0108 20:12:13.896276    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:12:13.896312    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-084500 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\boot2docker.iso'
	I0108 20:12:16.456852    4724 main.go:141] libmachine: [stdout =====>] : 
	I0108 20:12:16.457120    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:12:16.457298    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-084500 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\disk.vhd'
	I0108 20:12:19.016299    4724 main.go:141] libmachine: [stdout =====>] : 
	I0108 20:12:19.016436    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:12:19.016436    4724 main.go:141] libmachine: Starting VM...
	I0108 20:12:19.016436    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-084500
	I0108 20:12:21.964260    4724 main.go:141] libmachine: [stdout =====>] : 
	I0108 20:12:21.964260    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:12:21.964260    4724 main.go:141] libmachine: Waiting for host to start...
	I0108 20:12:21.964362    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:12:24.272371    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:12:24.272371    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:12:24.272502    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:12:26.809297    4724 main.go:141] libmachine: [stdout =====>] : 
	I0108 20:12:26.809297    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:12:27.822181    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:12:30.050721    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:12:30.050721    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:12:30.050721    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:12:32.592990    4724 main.go:141] libmachine: [stdout =====>] : 
	I0108 20:12:32.593143    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:12:33.597322    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:12:35.774966    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:12:35.775024    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:12:35.775024    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:12:38.290278    4724 main.go:141] libmachine: [stdout =====>] : 
	I0108 20:12:38.290338    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:12:39.298151    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:12:41.494256    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:12:41.494256    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:12:41.494256    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:12:43.992018    4724 main.go:141] libmachine: [stdout =====>] : 
	I0108 20:12:43.992018    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:12:44.992923    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:12:47.189571    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:12:47.189571    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:12:47.189816    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:12:49.697816    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:12:49.697816    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:12:49.697913    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:12:51.774091    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:12:51.774237    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:12:51.774237    4724 machine.go:88] provisioning docker machine ...
	I0108 20:12:51.774237    4724 buildroot.go:166] provisioning hostname "addons-084500"
	I0108 20:12:51.774237    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:12:53.893718    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:12:53.893718    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:12:53.893844    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:12:56.431819    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:12:56.432275    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:12:56.439581    4724 main.go:141] libmachine: Using SSH client type: native
	I0108 20:12:56.448720    4724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1056120] 0x1058c60 <nil>  [] 0s} 172.29.100.38 22 <nil> <nil>}
	I0108 20:12:56.448720    4724 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-084500 && echo "addons-084500" | sudo tee /etc/hostname
	I0108 20:12:56.599389    4724 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-084500
	
	I0108 20:12:56.600355    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:12:58.673493    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:12:58.673751    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:12:58.673751    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:13:01.154099    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:13:01.154099    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:01.158976    4724 main.go:141] libmachine: Using SSH client type: native
	I0108 20:13:01.159731    4724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1056120] 0x1058c60 <nil>  [] 0s} 172.29.100.38 22 <nil> <nil>}
	I0108 20:13:01.159731    4724 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-084500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-084500/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-084500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:13:01.311090    4724 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:13:01.311090    4724 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0108 20:13:01.311090    4724 buildroot.go:174] setting up certificates
	I0108 20:13:01.311090    4724 provision.go:83] configureAuth start
	I0108 20:13:01.311830    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:13:03.417616    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:13:03.417616    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:03.417835    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:13:05.925402    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:13:05.925402    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:05.925527    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:13:08.009619    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:13:08.009818    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:08.009818    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:13:10.487377    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:13:10.487729    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:10.487729    4724 provision.go:138] copyHostCerts
	I0108 20:13:10.488451    4724 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0108 20:13:10.490246    4724 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0108 20:13:10.491553    4724 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0108 20:13:10.492724    4724 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-084500 san=[172.29.100.38 172.29.100.38 localhost 127.0.0.1 minikube addons-084500]
	I0108 20:13:10.772324    4724 provision.go:172] copyRemoteCerts
	I0108 20:13:10.784309    4724 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:13:10.784309    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:13:12.861220    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:13:12.861281    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:12.861281    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:13:15.342605    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:13:15.342605    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:15.342756    4724 sshutil.go:53] new ssh client: &{IP:172.29.100.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\id_rsa Username:docker}
	I0108 20:13:15.455103    4724 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6707689s)
	I0108 20:13:15.455784    4724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 20:13:15.498215    4724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0108 20:13:15.539118    4724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 20:13:15.580993    4724 provision.go:86] duration metric: configureAuth took 14.2697429s
	I0108 20:13:15.581062    4724 buildroot.go:189] setting minikube options for container-runtime
	I0108 20:13:15.581759    4724 config.go:182] Loaded profile config "addons-084500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 20:13:15.581855    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:13:17.688980    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:13:17.688980    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:17.689078    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:13:20.196089    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:13:20.196275    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:20.203628    4724 main.go:141] libmachine: Using SSH client type: native
	I0108 20:13:20.204551    4724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1056120] 0x1058c60 <nil>  [] 0s} 172.29.100.38 22 <nil> <nil>}
	I0108 20:13:20.204551    4724 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 20:13:20.343533    4724 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0108 20:13:20.343533    4724 buildroot.go:70] root file system type: tmpfs
	I0108 20:13:20.343781    4724 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 20:13:20.343889    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:13:22.482733    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:13:22.482733    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:22.482831    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:13:25.059637    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:13:25.059807    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:25.064548    4724 main.go:141] libmachine: Using SSH client type: native
	I0108 20:13:25.065352    4724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1056120] 0x1058c60 <nil>  [] 0s} 172.29.100.38 22 <nil> <nil>}
	I0108 20:13:25.065352    4724 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 20:13:25.229115    4724 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 20:13:25.229866    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:13:27.349131    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:13:27.349131    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:27.349131    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:13:29.876524    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:13:29.876524    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:29.884227    4724 main.go:141] libmachine: Using SSH client type: native
	I0108 20:13:29.885084    4724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1056120] 0x1058c60 <nil>  [] 0s} 172.29.100.38 22 <nil> <nil>}
	I0108 20:13:29.885182    4724 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 20:13:30.846583    4724 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
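The SSH command above installs the new unit only when /lib/systemd/system/docker.service is missing or differs from the freshly written .new file (here it was missing, hence the diff error), then force-reloads, enables, and restarts Docker. The same compare-then-swap pattern as a standalone sketch, using the same paths, run inside the guest:

new=/lib/systemd/system/docker.service.new
cur=/lib/systemd/system/docker.service
# Replace the unit and restart Docker only when the current file is absent or different.
if ! sudo diff -u "$cur" "$new"; then
  sudo mv "$new" "$cur"
  sudo systemctl -f daemon-reload
  sudo systemctl -f enable docker
  sudo systemctl -f restart docker
fi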
	
	I0108 20:13:30.846583    4724 machine.go:91] provisioned docker machine in 39.0721394s
	I0108 20:13:30.846689    4724 client.go:171] LocalClient.Create took 1m49.0208844s
	I0108 20:13:30.846733    4724 start.go:167] duration metric: libmachine.API.Create for "addons-084500" took 1m49.0211841s
	I0108 20:13:30.846827    4724 start.go:300] post-start starting for "addons-084500" (driver="hyperv")
	I0108 20:13:30.846827    4724 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:13:30.863233    4724 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:13:30.863233    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:13:32.987992    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:13:32.988222    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:32.988222    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:13:35.481502    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:13:35.481804    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:35.482104    4724 sshutil.go:53] new ssh client: &{IP:172.29.100.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\id_rsa Username:docker}
	I0108 20:13:35.591520    4724 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7278042s)
	I0108 20:13:35.607521    4724 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:13:35.613527    4724 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 20:13:35.613527    4724 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0108 20:13:35.614271    4724 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0108 20:13:35.614271    4724 start.go:303] post-start completed in 4.7674192s
	I0108 20:13:35.618260    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:13:37.771572    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:13:37.771572    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:37.771843    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:13:40.320201    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:13:40.320201    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:40.320614    4724 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\config.json ...
	I0108 20:13:40.323611    4724 start.go:128] duration metric: createHost completed in 1m58.4989492s
	I0108 20:13:40.323706    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:13:42.412479    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:13:42.412479    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:42.412570    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:13:44.942052    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:13:44.942277    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:44.947904    4724 main.go:141] libmachine: Using SSH client type: native
	I0108 20:13:44.948636    4724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1056120] 0x1058c60 <nil>  [] 0s} 172.29.100.38 22 <nil> <nil>}
	I0108 20:13:44.948636    4724 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 20:13:45.076539    4724 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704744825.078485202
	
	I0108 20:13:45.076539    4724 fix.go:206] guest clock: 1704744825.078485202
	I0108 20:13:45.076539    4724 fix.go:219] Guest: 2024-01-08 20:13:45.078485202 +0000 UTC Remote: 2024-01-08 20:13:40.3237067 +0000 UTC m=+124.225434501 (delta=4.754778502s)
	I0108 20:13:45.076659    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:13:47.179352    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:13:47.179352    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:47.179352    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:13:49.690822    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:13:49.691069    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:49.696786    4724 main.go:141] libmachine: Using SSH client type: native
	I0108 20:13:49.697456    4724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1056120] 0x1058c60 <nil>  [] 0s} 172.29.100.38 22 <nil> <nil>}
	I0108 20:13:49.697456    4724 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1704744825
	I0108 20:13:49.851640    4724 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan  8 20:13:45 UTC 2024
	
	I0108 20:13:49.851640    4724 fix.go:226] clock set: Mon Jan  8 20:13:45 UTC 2024
	 (err=<nil>)
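The clock fix above reads the guest clock with date +%s.%N, compares it against the host time (a 4.75s delta here), and resets the guest with sudo date -s @<epoch>. A hedged sketch of the same check from the host side; the 2-second threshold and key-based SSH access as the docker user are illustrative assumptions, not values taken from this log:

# Sketch only: the threshold and SSH setup are assumptions.
guest=$(ssh docker@172.29.100.38 'date +%s.%N')
host=$(date +%s.%N)
drift=$(awk -v a="$host" -v b="$guest" 'BEGIN { d = a - b; if (d < 0) d = -d; print d }')
if awk -v d="$drift" 'BEGIN { exit !(d > 2) }'; then
  ssh docker@172.29.100.38 "sudo date -s @${host%.*}"   # reset guest clock to host epoch seconds
fi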
	I0108 20:13:49.851640    4724 start.go:83] releasing machines lock for "addons-084500", held for 2m8.0272278s
	I0108 20:13:49.851640    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:13:51.963115    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:13:51.963115    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:51.963243    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:13:54.520906    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:13:54.520906    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:54.525165    4724 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:13:54.525269    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:13:54.536078    4724 ssh_runner.go:195] Run: cat /version.json
	I0108 20:13:54.536078    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:13:56.741498    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:13:56.741498    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:56.741498    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:13:56.761532    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:13:56.761532    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:56.761709    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:13:59.330418    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:13:59.330628    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:59.331151    4724 sshutil.go:53] new ssh client: &{IP:172.29.100.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\id_rsa Username:docker}
	I0108 20:13:59.350235    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:13:59.350235    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:13:59.350235    4724 sshutil.go:53] new ssh client: &{IP:172.29.100.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\id_rsa Username:docker}
	I0108 20:13:59.567414    4724 ssh_runner.go:235] Completed: cat /version.json: (5.0313093s)
	I0108 20:13:59.567548    4724 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0423566s)
	I0108 20:13:59.580482    4724 ssh_runner.go:195] Run: systemctl --version
	I0108 20:13:59.603853    4724 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 20:13:59.611194    4724 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 20:13:59.626796    4724 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:13:59.649326    4724 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 20:13:59.649326    4724 start.go:475] detecting cgroup driver to use...
	I0108 20:13:59.649876    4724 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:13:59.690276    4724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0108 20:13:59.718356    4724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 20:13:59.733844    4724 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 20:13:59.746902    4724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 20:13:59.773734    4724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 20:13:59.801694    4724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 20:13:59.828080    4724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 20:13:59.857828    4724 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 20:13:59.889118    4724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 20:13:59.921781    4724 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 20:13:59.949919    4724 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 20:13:59.978455    4724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:14:00.141305    4724 ssh_runner.go:195] Run: sudo systemctl restart containerd
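The run of sed edits above rewrites /etc/containerd/config.toml in place so containerd uses the cgroupfs cgroup driver, the registry.k8s.io/pause:3.9 sandbox image, the runc v2 runtime, and /etc/cni/net.d for CNI config, then reloads and restarts containerd. The key edits condensed into a sketch (same files and values as above, run inside the guest):

# Condensed from the commands above.
sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml
sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml
sudo systemctl daemon-reload && sudo systemctl restart containerd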
	I0108 20:14:00.169645    4724 start.go:475] detecting cgroup driver to use...
	I0108 20:14:00.183330    4724 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 20:14:00.214014    4724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 20:14:00.250900    4724 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 20:14:00.289255    4724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 20:14:00.320822    4724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 20:14:00.351815    4724 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 20:14:00.403786    4724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 20:14:00.423813    4724 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:14:00.463827    4724 ssh_runner.go:195] Run: which cri-dockerd
	I0108 20:14:00.481792    4724 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 20:14:00.498336    4724 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0108 20:14:00.538577    4724 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 20:14:00.709616    4724 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 20:14:00.879793    4724 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0108 20:14:00.879793    4724 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0108 20:14:00.925091    4724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:14:01.100011    4724 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 20:14:02.590857    4724 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4907135s)
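The step above only records that a 130-byte /etc/docker/daemon.json was pushed to switch Docker to the cgroupfs driver; the file's contents are not printed. The snippet below is a plausible equivalent rather than the literal payload; only the cgroupfs exec-opt is implied by the log:

# Hypothetical daemon.json; the real 130-byte file is not shown in this log.
sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker
docker info --format '{{.CgroupDriver}}'   # expect: cgroupfs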
	I0108 20:14:02.605879    4724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0108 20:14:02.641227    4724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0108 20:14:02.676173    4724 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0108 20:14:02.848580    4724 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 20:14:03.023838    4724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:14:03.195496    4724 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0108 20:14:03.235266    4724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0108 20:14:03.269083    4724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:14:03.447088    4724 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0108 20:14:03.550393    4724 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 20:14:03.563972    4724 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 20:14:03.572536    4724 start.go:543] Will wait 60s for crictl version
	I0108 20:14:03.586349    4724 ssh_runner.go:195] Run: which crictl
	I0108 20:14:03.605434    4724 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 20:14:03.678496    4724 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0108 20:14:03.689216    4724 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 20:14:03.731132    4724 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
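The version probes above go through crictl (which reads the runtime-endpoint written to /etc/crictl.yaml earlier) and the Docker CLI. The same checks done by hand inside the guest look like this; a sketch, where the 24.0.7 value is simply what this run reports:

# Same probes by hand; the runtime-endpoint comes from /etc/crictl.yaml written above.
sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
docker version --format '{{.Server.Version}}'   # this run reports 24.0.7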
	I0108 20:14:03.763341    4724 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0108 20:14:03.763507    4724 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0108 20:14:03.768585    4724 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0108 20:14:03.768585    4724 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0108 20:14:03.768585    4724 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0108 20:14:03.768585    4724 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:21:91:d6 Flags:up|broadcast|multicast|running}
	I0108 20:14:03.771613    4724 ip.go:210] interface addr: fe80::2be2:9d7a:5cc4:f25c/64
	I0108 20:14:03.771613    4724 ip.go:210] interface addr: 172.29.96.1/20
	I0108 20:14:03.787148    4724 ssh_runner.go:195] Run: grep 172.29.96.1	host.minikube.internal$ /etc/hosts
	I0108 20:14:03.792618    4724 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
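The one-liner above keeps /etc/hosts idempotent: it strips any existing host.minikube.internal entry before appending the current gateway address, so repeated provisioning never duplicates the line. The same pattern, expanded for readability with the same IP and hostname:

ip=172.29.96.1
name=host.minikube.internal
# Drop any stale entry for $name, then append the current mapping.
{ grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/hosts.$$
sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$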
	I0108 20:14:03.809858    4724 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 20:14:03.820924    4724 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 20:14:03.843420    4724 docker.go:685] Got preloaded images: 
	I0108 20:14:03.843420    4724 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0108 20:14:03.861044    4724 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0108 20:14:03.891569    4724 ssh_runner.go:195] Run: which lz4
	I0108 20:14:03.910358    4724 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0108 20:14:03.916332    4724 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 20:14:03.916612    4724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0108 20:14:06.194576    4724 docker.go:649] Took 2.297601 seconds to copy over tarball
	I0108 20:14:06.210898    4724 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 20:14:11.879221    4724 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (5.6682938s)
	I0108 20:14:11.879221    4724 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 20:14:11.946269    4724 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0108 20:14:11.962032    4724 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0108 20:14:12.002147    4724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:14:12.177182    4724 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 20:14:18.517557    4724 ssh_runner.go:235] Completed: sudo systemctl restart docker: (6.3401783s)
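Because registry.k8s.io/kube-apiserver:v1.28.4 was not preloaded, the steps above copy the ~423 MB preloaded-images tarball into the guest, unpack it directly into /var (which populates Docker's image store), restore repositories.json, and restart Docker so the daemon indexes the layers. Condensed into a sketch, assuming the tarball has already been copied to /preloaded.tar.lz4:

# Condensed from the steps above; assumes /preloaded.tar.lz4 is already in place.
which lz4                                        # extraction requires lz4 in the guest
sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4   # unpack image layers under /var/lib/docker
sudo rm -f /preloaded.tar.lz4
sudo systemctl restart docker                    # daemon re-reads the restored image store
docker images --format '{{.Repository}}:{{.Tag}}'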
	I0108 20:14:18.527857    4724 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 20:14:18.553142    4724 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0108 20:14:18.553142    4724 cache_images.go:84] Images are preloaded, skipping loading
	I0108 20:14:18.562680    4724 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 20:14:18.597931    4724 cni.go:84] Creating CNI manager for ""
	I0108 20:14:18.598354    4724 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 20:14:18.598420    4724 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 20:14:18.598420    4724 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.29.100.38 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-084500 NodeName:addons-084500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.29.100.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.29.100.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 20:14:18.598610    4724 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.29.100.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-084500"
	  kubeletExtraArgs:
	    node-ip: 172.29.100.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.29.100.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 20:14:18.598610    4724 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-084500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.100.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-084500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 20:14:18.613076    4724 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 20:14:18.626390    4724 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 20:14:18.640291    4724 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 20:14:18.654331    4724 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0108 20:14:18.678702    4724 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 20:14:18.702661    4724 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
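The kubeadm config generated above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in one 2099-byte file) is copied to /var/tmp/minikube/kubeadm.yaml.new and only promoted to kubeadm.yaml later. Two ways to sanity-check such a file before the real init; these use standard kubeadm flags and the binary path from this run, but they are illustrations, not steps this run performs:

# Illustrative checks only; not steps taken in the log above.
sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml.new   # images the config implies
sudo /var/lib/minikube/binaries/v1.28.4/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run        # parse and validate without changing the node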
	I0108 20:14:18.740696    4724 ssh_runner.go:195] Run: grep 172.29.100.38	control-plane.minikube.internal$ /etc/hosts
	I0108 20:14:18.745022    4724 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.100.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 20:14:18.762411    4724 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500 for IP: 172.29.100.38
	I0108 20:14:18.762545    4724 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:14:18.762905    4724 certs.go:204] generating minikubeCA CA: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0108 20:14:18.997633    4724 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt ...
	I0108 20:14:18.997633    4724 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt: {Name:mkfaab427ca81a644dd8158f14f3f807f65e8ec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:14:18.999792    4724 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key ...
	I0108 20:14:18.999792    4724 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key: {Name:mke77f92a4900f4ba92d06a20a85ddb2e967d43b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:14:19.001295    4724 certs.go:204] generating proxyClientCA CA: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0108 20:14:19.137591    4724 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0108 20:14:19.137591    4724 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mk06242bb3e648e29b1f160fecc7578d1c3ccbe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:14:19.139722    4724 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key ...
	I0108 20:14:19.139722    4724 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk9dbfc690f0c353aa1a789ba901364f0646dd1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:14:19.140733    4724 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.key
	I0108 20:14:19.141722    4724 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt with IP's: []
	I0108 20:14:19.270358    4724 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt ...
	I0108 20:14:19.270358    4724 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: {Name:mkba76a398ec6684e1afb52584897379065f0259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:14:19.271427    4724 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.key ...
	I0108 20:14:19.271427    4724 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.key: {Name:mk0985f5b74f3292d71dd51dfe63240a1f664766 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:14:19.272424    4724 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\apiserver.key.eb1dda97
	I0108 20:14:19.273450    4724 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\apiserver.crt.eb1dda97 with IP's: [172.29.100.38 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 20:14:19.364353    4724 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\apiserver.crt.eb1dda97 ...
	I0108 20:14:19.364353    4724 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\apiserver.crt.eb1dda97: {Name:mk41488454581025c227181a62fa19fb1759ed77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:14:19.365377    4724 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\apiserver.key.eb1dda97 ...
	I0108 20:14:19.365377    4724 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\apiserver.key.eb1dda97: {Name:mked3d80968db5c168d112d5bf72b7a3be8d5fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:14:19.366789    4724 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\apiserver.crt.eb1dda97 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\apiserver.crt
	I0108 20:14:19.375884    4724 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\apiserver.key.eb1dda97 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\apiserver.key
	I0108 20:14:19.381513    4724 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\proxy-client.key
	I0108 20:14:19.381513    4724 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\proxy-client.crt with IP's: []
	I0108 20:14:19.629210    4724 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\proxy-client.crt ...
	I0108 20:14:19.629210    4724 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\proxy-client.crt: {Name:mk2a7a99aa5b2915b77aecac2ad91a0887b275ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:14:19.630932    4724 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\proxy-client.key ...
	I0108 20:14:19.630932    4724 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\proxy-client.key: {Name:mk9c1a9a69d4fd3adc4b5b0b846dc63dee7dbe41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:14:19.642153    4724 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0108 20:14:19.642368    4724 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0108 20:14:19.642368    4724 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0108 20:14:19.642368    4724 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0108 20:14:19.644254    4724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 20:14:19.686351    4724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 20:14:19.725413    4724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 20:14:19.768707    4724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 20:14:19.808328    4724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 20:14:19.850784    4724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 20:14:19.896368    4724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 20:14:19.945735    4724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 20:14:19.986955    4724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 20:14:20.030350    4724 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 20:14:20.076409    4724 ssh_runner.go:195] Run: openssl version
	I0108 20:14:20.097089    4724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 20:14:20.126348    4724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:14:20.134999    4724 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:14 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:14:20.150329    4724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:14:20.174532    4724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 20:14:20.202280    4724 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 20:14:20.208961    4724 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:14:20.209207    4724 kubeadm.go:404] StartCluster: {Name:addons-084500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-084500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.29.100.38 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:14:20.219261    4724 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 20:14:20.261146    4724 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 20:14:20.288248    4724 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 20:14:20.314914    4724 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 20:14:20.329535    4724 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 20:14:20.329661    4724 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 20:14:20.605149    4724 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 20:14:34.365819    4724 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 20:14:34.366061    4724 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 20:14:34.366163    4724 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 20:14:34.366163    4724 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 20:14:34.366163    4724 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 20:14:34.366800    4724 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 20:14:34.367243    4724 out.go:204]   - Generating certificates and keys ...
	I0108 20:14:34.367243    4724 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 20:14:34.367838    4724 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 20:14:34.367838    4724 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 20:14:34.367838    4724 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 20:14:34.368478    4724 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 20:14:34.368589    4724 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 20:14:34.368801    4724 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 20:14:34.369143    4724 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-084500 localhost] and IPs [172.29.100.38 127.0.0.1 ::1]
	I0108 20:14:34.369362    4724 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 20:14:34.369803    4724 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-084500 localhost] and IPs [172.29.100.38 127.0.0.1 ::1]
	I0108 20:14:34.370056    4724 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 20:14:34.370125    4724 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 20:14:34.370125    4724 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 20:14:34.370125    4724 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 20:14:34.370125    4724 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 20:14:34.370910    4724 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 20:14:34.371015    4724 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 20:14:34.371015    4724 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 20:14:34.371015    4724 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 20:14:34.371015    4724 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 20:14:34.372153    4724 out.go:204]   - Booting up control plane ...
	I0108 20:14:34.372195    4724 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 20:14:34.372195    4724 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 20:14:34.372195    4724 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 20:14:34.373015    4724 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 20:14:34.373278    4724 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 20:14:34.373470    4724 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 20:14:34.373880    4724 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 20:14:34.374157    4724 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.005157 seconds
	I0108 20:14:34.374399    4724 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 20:14:34.374399    4724 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 20:14:34.374966    4724 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 20:14:34.375231    4724 kubeadm.go:322] [mark-control-plane] Marking the node addons-084500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 20:14:34.375231    4724 kubeadm.go:322] [bootstrap-token] Using token: 0x4jel.abcrk0jpnxkjf47s
	I0108 20:14:34.376041    4724 out.go:204]   - Configuring RBAC rules ...
	I0108 20:14:34.376041    4724 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 20:14:34.376757    4724 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 20:14:34.376868    4724 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 20:14:34.376868    4724 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 20:14:34.377492    4724 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 20:14:34.377492    4724 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 20:14:34.378095    4724 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 20:14:34.378362    4724 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 20:14:34.378518    4724 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 20:14:34.378633    4724 kubeadm.go:322] 
	I0108 20:14:34.378687    4724 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 20:14:34.378687    4724 kubeadm.go:322] 
	I0108 20:14:34.378687    4724 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 20:14:34.378687    4724 kubeadm.go:322] 
	I0108 20:14:34.378687    4724 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 20:14:34.379236    4724 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 20:14:34.379385    4724 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 20:14:34.379385    4724 kubeadm.go:322] 
	I0108 20:14:34.379606    4724 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 20:14:34.379731    4724 kubeadm.go:322] 
	I0108 20:14:34.380016    4724 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 20:14:34.380119    4724 kubeadm.go:322] 
	I0108 20:14:34.380280    4724 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 20:14:34.380314    4724 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 20:14:34.380314    4724 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 20:14:34.380314    4724 kubeadm.go:322] 
	I0108 20:14:34.380927    4724 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 20:14:34.381173    4724 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 20:14:34.381173    4724 kubeadm.go:322] 
	I0108 20:14:34.381375    4724 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0x4jel.abcrk0jpnxkjf47s \
	I0108 20:14:34.381606    4724 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5d4a576aad216c3b59d844299451f9173aefcd7f6ecd29b777d9935fab24b02c \
	I0108 20:14:34.381705    4724 kubeadm.go:322] 	--control-plane 
	I0108 20:14:34.381759    4724 kubeadm.go:322] 
	I0108 20:14:34.381988    4724 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 20:14:34.382047    4724 kubeadm.go:322] 
	I0108 20:14:34.382094    4724 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0x4jel.abcrk0jpnxkjf47s \
	I0108 20:14:34.382094    4724 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5d4a576aad216c3b59d844299451f9173aefcd7f6ecd29b777d9935fab24b02c 
	I0108 20:14:34.382094    4724 cni.go:84] Creating CNI manager for ""
	I0108 20:14:34.382094    4724 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 20:14:34.382888    4724 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 20:14:34.396168    4724 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 20:14:34.419440    4724 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
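The 457-byte /etc/cni/net.d/1-k8s.conflist pushed above carries the bridge CNI configuration announced at 20:14:34.382888; its contents are not printed in this log. The file below is a representative bridge-plus-portmap conflist for the 10.244.0.0/16 pod CIDR used by this cluster, offered as an assumption rather than the literal payload:

# Hypothetical conflist; the real 457-byte file is not shown in this log.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF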
	I0108 20:14:34.456083    4724 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 20:14:34.472142    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:34.474601    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28 minikube.k8s.io/name=addons-084500 minikube.k8s.io/updated_at=2024_01_08T20_14_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:34.491801    4724 ops.go:34] apiserver oom_adj: -16
	I0108 20:14:34.829967    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:35.341111    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:35.842580    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:36.340508    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:36.843997    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:37.333733    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:37.840633    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:38.331571    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:38.833830    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:39.347470    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:39.850231    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:40.336534    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:40.856639    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:41.334729    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:41.833309    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:42.340147    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:42.836517    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:43.337655    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:43.833415    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:44.353071    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:44.840558    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:45.342907    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:45.834139    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:46.333178    4724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:14:46.483340    4724 kubeadm.go:1088] duration metric: took 12.027094s to wait for elevateKubeSystemPrivileges.
	I0108 20:14:46.483471    4724 kubeadm.go:406] StartCluster complete in 26.2741244s
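The repeated kubectl get sa default runs between 20:14:34 and 20:14:46 are a readiness poll: the cluster is only treated as ready for the next steps once the default ServiceAccount exists (about 12s here, per the elevateKubeSystemPrivileges metric). The same wait as a standalone sketch, with a 60-second timeout chosen purely for illustration:

# Poll until the "default" ServiceAccount exists; the 60s timeout is illustrative.
deadline=$(( $(date +%s) + 60 ))
until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  if [ "$(date +%s)" -ge "$deadline" ]; then
    echo "timed out waiting for the default ServiceAccount" >&2
    break
  fi
  sleep 0.5
done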
	I0108 20:14:46.483581    4724 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:14:46.483711    4724 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 20:14:46.484556    4724 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:14:46.486207    4724 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 20:14:46.486207    4724 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0108 20:14:46.486756    4724 addons.go:69] Setting yakd=true in profile "addons-084500"
	I0108 20:14:46.486795    4724 addons.go:69] Setting gcp-auth=true in profile "addons-084500"
	I0108 20:14:46.486795    4724 mustload.go:65] Loading cluster: addons-084500
	I0108 20:14:46.486795    4724 addons.go:69] Setting ingress=true in profile "addons-084500"
	I0108 20:14:46.486795    4724 addons.go:237] Setting addon ingress=true in "addons-084500"
	I0108 20:14:46.486795    4724 config.go:182] Loaded profile config "addons-084500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 20:14:46.486795    4724 host.go:66] Checking if "addons-084500" exists ...
	I0108 20:14:46.486795    4724 addons.go:69] Setting cloud-spanner=true in profile "addons-084500"
	I0108 20:14:46.486795    4724 addons.go:237] Setting addon cloud-spanner=true in "addons-084500"
	I0108 20:14:46.486795    4724 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-084500"
	I0108 20:14:46.487333    4724 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-084500"
	I0108 20:14:46.487333    4724 config.go:182] Loaded profile config "addons-084500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 20:14:46.487472    4724 host.go:66] Checking if "addons-084500" exists ...
	I0108 20:14:46.487472    4724 addons.go:69] Setting storage-provisioner=true in profile "addons-084500"
	I0108 20:14:46.487472    4724 addons.go:237] Setting addon storage-provisioner=true in "addons-084500"
	I0108 20:14:46.487472    4724 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-084500"
	I0108 20:14:46.487663    4724 host.go:66] Checking if "addons-084500" exists ...
	I0108 20:14:46.487778    4724 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-084500"
	I0108 20:14:46.486795    4724 addons.go:237] Setting addon yakd=true in "addons-084500"
	I0108 20:14:46.488119    4724 host.go:66] Checking if "addons-084500" exists ...
	I0108 20:14:46.488119    4724 addons.go:69] Setting registry=true in profile "addons-084500"
	I0108 20:14:46.488332    4724 addons.go:237] Setting addon registry=true in "addons-084500"
	I0108 20:14:46.488409    4724 addons.go:69] Setting ingress-dns=true in profile "addons-084500"
	I0108 20:14:46.488409    4724 addons.go:69] Setting default-storageclass=true in profile "addons-084500"
	I0108 20:14:46.488467    4724 addons.go:237] Setting addon ingress-dns=true in "addons-084500"
	I0108 20:14:46.488467    4724 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-084500"
	I0108 20:14:46.488539    4724 host.go:66] Checking if "addons-084500" exists ...
	I0108 20:14:46.487472    4724 host.go:66] Checking if "addons-084500" exists ...
	I0108 20:14:46.488539    4724 host.go:66] Checking if "addons-084500" exists ...
	I0108 20:14:46.486795    4724 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-084500"
	I0108 20:14:46.488539    4724 addons.go:69] Setting volumesnapshots=true in profile "addons-084500"
	I0108 20:14:46.489075    4724 addons.go:237] Setting addon volumesnapshots=true in "addons-084500"
	I0108 20:14:46.489075    4724 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-084500"
	I0108 20:14:46.489183    4724 host.go:66] Checking if "addons-084500" exists ...
	I0108 20:14:46.486795    4724 addons.go:69] Setting helm-tiller=true in profile "addons-084500"
	I0108 20:14:46.489279    4724 host.go:66] Checking if "addons-084500" exists ...
	I0108 20:14:46.489279    4724 addons.go:237] Setting addon helm-tiller=true in "addons-084500"
	I0108 20:14:46.489450    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:46.489548    4724 addons.go:69] Setting inspektor-gadget=true in profile "addons-084500"
	I0108 20:14:46.489668    4724 addons.go:237] Setting addon inspektor-gadget=true in "addons-084500"
	I0108 20:14:46.489548    4724 addons.go:69] Setting metrics-server=true in profile "addons-084500"
	I0108 20:14:46.489824    4724 host.go:66] Checking if "addons-084500" exists ...
	I0108 20:14:46.489824    4724 addons.go:237] Setting addon metrics-server=true in "addons-084500"
	I0108 20:14:46.490050    4724 host.go:66] Checking if "addons-084500" exists ...
	I0108 20:14:46.489450    4724 host.go:66] Checking if "addons-084500" exists ...
	I0108 20:14:46.490442    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:46.491234    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:46.491952    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:46.493048    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:46.494116    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:46.494116    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:46.494116    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:46.494655    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:46.494766    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:46.494766    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:46.495418    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:46.495418    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:46.495418    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:46.495418    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:47.139820    4724 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.29.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 20:14:47.216278    4724 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-084500" context rescaled to 1 replicas
	I0108 20:14:47.216278    4724 start.go:223] Will wait 6m0s for node &{Name: IP:172.29.100.38 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 20:14:47.219299    4724 out.go:177] * Verifying Kubernetes components...
	I0108 20:14:47.260358    4724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:14:51.828866    4724 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.29.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.6890218s)
	I0108 20:14:51.828866    4724 start.go:929] {"host.minikube.internal": 172.29.96.1} host record injected into CoreDNS's ConfigMap
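	(Note on the command completed above: the bash pipeline rewrites the CoreDNS Corefile stored in the coredns ConfigMap so that pods in the guest can resolve host.minikube.internal to the Hyper-V host. Reconstructed from the sed expressions in that command, the block it injects ahead of the "forward . /etc/resolv.conf" plugin is:

		hosts {
		   172.29.96.1 host.minikube.internal
		   fallthrough
		}

	It also inserts a "log" directive before "errors". The result can be inspected with the same "kubectl -n kube-system get configmap coredns -o yaml" call shown earlier in this log; the surrounding Corefile contents otherwise depend on the CoreDNS version bundled with Kubernetes v1.28.4.)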
	I0108 20:14:51.828866    4724 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (4.5684838s)
	I0108 20:14:51.850012    4724 node_ready.go:35] waiting up to 6m0s for node "addons-084500" to be "Ready" ...
	I0108 20:14:51.895133    4724 node_ready.go:49] node "addons-084500" has status "Ready":"True"
	I0108 20:14:51.895133    4724 node_ready.go:38] duration metric: took 45.1209ms waiting for node "addons-084500" to be "Ready" ...
	I0108 20:14:51.895133    4724 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:14:52.002876    4724 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bt568" in "kube-system" namespace to be "Ready" ...
	I0108 20:14:52.717397    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:52.717451    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:52.721470    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:52.721470    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:52.722962    4724 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0108 20:14:52.722581    4724 addons.go:237] Setting addon default-storageclass=true in "addons-084500"
	I0108 20:14:52.723931    4724 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 20:14:52.724006    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0108 20:14:52.724006    4724 host.go:66] Checking if "addons-084500" exists ...
	I0108 20:14:52.724068    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:52.726133    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:52.729367    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:52.729367    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:52.731722    4724 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0108 20:14:52.732268    4724 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0108 20:14:52.732268    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0108 20:14:52.732268    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:52.737964    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:52.737964    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:52.746336    4724 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0108 20:14:52.747727    4724 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I0108 20:14:52.747727    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0108 20:14:52.747727    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:52.796637    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:52.796637    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:52.796637    4724 host.go:66] Checking if "addons-084500" exists ...
	I0108 20:14:52.856221    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:52.859599    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:52.866087    4724 out.go:177]   - Using image docker.io/registry:2.8.3
	I0108 20:14:52.874706    4724 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0108 20:14:52.882791    4724 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I0108 20:14:52.882791    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0108 20:14:52.882791    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:52.946617    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:52.946617    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:52.948444    4724 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0108 20:14:52.949133    4724 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0108 20:14:52.949133    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0108 20:14:52.949133    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:52.989101    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:52.989101    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:52.992305    4724 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0108 20:14:52.994470    4724 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 20:14:52.994470    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0108 20:14:52.994470    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:52.989101    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:52.994470    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:52.989101    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:52.998168    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:53.014220    4724 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0108 20:14:52.998168    4724 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0108 20:14:53.030370    4724 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 20:14:53.029254    4724 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0108 20:14:53.029254    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:53.037262    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:53.051517    4724 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-084500"
	I0108 20:14:53.051517    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0108 20:14:53.076954    4724 host.go:66] Checking if "addons-084500" exists ...
	I0108 20:14:53.076954    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:53.076954    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:53.079112    4724 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 20:14:53.081163    4724 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 20:14:53.081163    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0108 20:14:53.081163    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:53.090471    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:53.090471    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:53.094074    4724 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0108 20:14:53.122880    4724 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0108 20:14:53.122880    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0108 20:14:53.122880    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:53.448175    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:53.448175    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:53.449475    4724 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0108 20:14:53.449782    4724 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 20:14:53.449782    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 20:14:53.449782    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:53.631394    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:53.631394    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:53.666350    4724 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:14:53.668357    4724 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:14:53.668432    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 20:14:53.668500    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:53.719385    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:53.719385    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:53.730215    4724 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0108 20:14:53.736221    4724 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0108 20:14:53.737897    4724 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0108 20:14:53.742790    4724 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0108 20:14:53.744047    4724 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0108 20:14:53.745178    4724 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0108 20:14:53.746299    4724 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0108 20:14:53.746781    4724 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0108 20:14:53.747675    4724 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0108 20:14:53.747675    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0108 20:14:53.747675    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:54.057654    4724 pod_ready.go:102] pod "coredns-5dd5756b68-bt568" in "kube-system" namespace has status "Ready":"False"
	I0108 20:14:56.425110    4724 pod_ready.go:102] pod "coredns-5dd5756b68-bt568" in "kube-system" namespace has status "Ready":"False"
	I0108 20:14:58.276305    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:58.276305    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:58.276305    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:14:58.380569    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:58.380569    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:58.380569    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:14:58.402467    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:58.402467    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:58.403031    4724 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 20:14:58.403031    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 20:14:58.403031    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:58.443031    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:58.443031    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:58.443031    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:14:58.610716    4724 pod_ready.go:102] pod "coredns-5dd5756b68-bt568" in "kube-system" namespace has status "Ready":"False"
	I0108 20:14:58.631575    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:58.631575    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:58.631575    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:14:58.728882    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:58.728882    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:58.733626    4724 out.go:177]   - Using image docker.io/busybox:stable
	I0108 20:14:58.734011    4724 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0108 20:14:58.735831    4724 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 20:14:58.735831    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0108 20:14:58.735831    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:58.869378    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:58.869378    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:58.869378    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:14:58.884781    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:58.884781    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:58.884781    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:14:58.894710    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:58.894710    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:58.894710    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:14:59.609491    4724 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0108 20:14:59.609491    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:14:59.802435    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:59.802435    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:59.802435    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:14:59.807602    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:14:59.807602    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:14:59.808173    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:15:00.184815    4724 pod_ready.go:92] pod "coredns-5dd5756b68-bt568" in "kube-system" namespace has status "Ready":"True"
	I0108 20:15:00.184815    4724 pod_ready.go:81] duration metric: took 8.1818952s waiting for pod "coredns-5dd5756b68-bt568" in "kube-system" namespace to be "Ready" ...
	I0108 20:15:00.184815    4724 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-084500" in "kube-system" namespace to be "Ready" ...
	I0108 20:15:00.280527    4724 pod_ready.go:92] pod "etcd-addons-084500" in "kube-system" namespace has status "Ready":"True"
	I0108 20:15:00.280527    4724 pod_ready.go:81] duration metric: took 95.7116ms waiting for pod "etcd-addons-084500" in "kube-system" namespace to be "Ready" ...
	I0108 20:15:00.280527    4724 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-084500" in "kube-system" namespace to be "Ready" ...
	I0108 20:15:00.324785    4724 pod_ready.go:92] pod "kube-apiserver-addons-084500" in "kube-system" namespace has status "Ready":"True"
	I0108 20:15:00.324785    4724 pod_ready.go:81] duration metric: took 44.2581ms waiting for pod "kube-apiserver-addons-084500" in "kube-system" namespace to be "Ready" ...
	I0108 20:15:00.324785    4724 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-084500" in "kube-system" namespace to be "Ready" ...
	I0108 20:15:00.351919    4724 pod_ready.go:92] pod "kube-controller-manager-addons-084500" in "kube-system" namespace has status "Ready":"True"
	I0108 20:15:00.351919    4724 pod_ready.go:81] duration metric: took 27.1342ms waiting for pod "kube-controller-manager-addons-084500" in "kube-system" namespace to be "Ready" ...
	I0108 20:15:00.351919    4724 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lddb9" in "kube-system" namespace to be "Ready" ...
	I0108 20:15:00.376109    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:15:00.384287    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:00.384287    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:15:00.422164    4724 pod_ready.go:92] pod "kube-proxy-lddb9" in "kube-system" namespace has status "Ready":"True"
	I0108 20:15:00.422164    4724 pod_ready.go:81] duration metric: took 70.2445ms waiting for pod "kube-proxy-lddb9" in "kube-system" namespace to be "Ready" ...
	I0108 20:15:00.422164    4724 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-084500" in "kube-system" namespace to be "Ready" ...
	I0108 20:15:00.462937    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:15:00.462937    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:00.462937    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:15:00.503523    4724 pod_ready.go:92] pod "kube-scheduler-addons-084500" in "kube-system" namespace has status "Ready":"True"
	I0108 20:15:00.503523    4724 pod_ready.go:81] duration metric: took 81.3586ms waiting for pod "kube-scheduler-addons-084500" in "kube-system" namespace to be "Ready" ...
	I0108 20:15:00.503523    4724 pod_ready.go:38] duration metric: took 8.6083442s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:15:00.503523    4724 api_server.go:52] waiting for apiserver process to appear ...
	I0108 20:15:00.529657    4724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:15:01.063510    4724 api_server.go:72] duration metric: took 13.8471585s to wait for apiserver process to appear ...
	I0108 20:15:01.063510    4724 api_server.go:88] waiting for apiserver healthz status ...
	I0108 20:15:01.063510    4724 api_server.go:253] Checking apiserver healthz at https://172.29.100.38:8443/healthz ...
	I0108 20:15:01.285618    4724 api_server.go:279] https://172.29.100.38:8443/healthz returned 200:
	ok
	I0108 20:15:01.366642    4724 api_server.go:141] control plane version: v1.28.4
	I0108 20:15:01.366642    4724 api_server.go:131] duration metric: took 303.1304ms to wait for apiserver health ...
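	(The healthz probe above can be reproduced from the host while debugging; a minimal sketch, assuming curl is available and that the cluster's self-signed CA is not in the local trust store, hence -k:

		curl -k https://172.29.100.38:8443/healthz

	A healthy apiserver answers with HTTP 200 and the body "ok", matching the response recorded above. The -k flag, or alternatively --cacert pointed at the profile's ca.crt, is an assumption about the local setup, not something minikube itself runs here.)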
	I0108 20:15:01.366642    4724 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 20:15:01.431643    4724 system_pods.go:59] 6 kube-system pods found
	I0108 20:15:01.431643    4724 system_pods.go:61] "coredns-5dd5756b68-bt568" [1033d286-d867-4d93-ac04-8da943ed021d] Running
	I0108 20:15:01.431643    4724 system_pods.go:61] "etcd-addons-084500" [7f930169-b306-43f5-aac7-53a9aa238788] Running
	I0108 20:15:01.431643    4724 system_pods.go:61] "kube-apiserver-addons-084500" [7763a6a3-b9ae-4795-9b3e-f66ab6e23b9c] Running
	I0108 20:15:01.431643    4724 system_pods.go:61] "kube-controller-manager-addons-084500" [51f536b1-3661-4e11-a3f4-646a754ef3b1] Running
	I0108 20:15:01.431643    4724 system_pods.go:61] "kube-proxy-lddb9" [49e339c3-01ca-4070-8fb7-5515896296f3] Running
	I0108 20:15:01.431643    4724 system_pods.go:61] "kube-scheduler-addons-084500" [e642800e-6956-49a8-8d88-3e2c77df4af9] Running
	I0108 20:15:01.431643    4724 system_pods.go:74] duration metric: took 65.0007ms to wait for pod list to return data ...
	I0108 20:15:01.431643    4724 default_sa.go:34] waiting for default service account to be created ...
	I0108 20:15:01.490544    4724 default_sa.go:45] found service account: "default"
	I0108 20:15:01.490544    4724 default_sa.go:55] duration metric: took 58.9009ms for default service account to be created ...
	I0108 20:15:01.490544    4724 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 20:15:01.567660    4724 system_pods.go:86] 6 kube-system pods found
	I0108 20:15:01.567660    4724 system_pods.go:89] "coredns-5dd5756b68-bt568" [1033d286-d867-4d93-ac04-8da943ed021d] Running
	I0108 20:15:01.567660    4724 system_pods.go:89] "etcd-addons-084500" [7f930169-b306-43f5-aac7-53a9aa238788] Running
	I0108 20:15:01.567660    4724 system_pods.go:89] "kube-apiserver-addons-084500" [7763a6a3-b9ae-4795-9b3e-f66ab6e23b9c] Running
	I0108 20:15:01.567660    4724 system_pods.go:89] "kube-controller-manager-addons-084500" [51f536b1-3661-4e11-a3f4-646a754ef3b1] Running
	I0108 20:15:01.567660    4724 system_pods.go:89] "kube-proxy-lddb9" [49e339c3-01ca-4070-8fb7-5515896296f3] Running
	I0108 20:15:01.567660    4724 system_pods.go:89] "kube-scheduler-addons-084500" [e642800e-6956-49a8-8d88-3e2c77df4af9] Running
	I0108 20:15:01.567660    4724 system_pods.go:126] duration metric: took 77.1155ms to wait for k8s-apps to be running ...
	I0108 20:15:01.567660    4724 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 20:15:01.583489    4724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:15:01.715314    4724 system_svc.go:56] duration metric: took 147.653ms WaitForService to wait for kubelet.
	I0108 20:15:01.715314    4724 kubeadm.go:581] duration metric: took 14.498959s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 20:15:01.715314    4724 node_conditions.go:102] verifying NodePressure condition ...
	I0108 20:15:01.812147    4724 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 20:15:01.812147    4724 node_conditions.go:123] node cpu capacity is 2
	I0108 20:15:01.812147    4724 node_conditions.go:105] duration metric: took 96.8331ms to run NodePressure ...
	I0108 20:15:01.812147    4724 start.go:228] waiting for startup goroutines ...
	I0108 20:15:02.077029    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:15:02.077029    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:02.077029    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:15:04.291075    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:15:04.291075    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:04.291075    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:15:04.684930    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:15:04.684930    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:04.684930    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:15:04.883660    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:15:04.883660    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:04.883660    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:15:05.376085    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:15:05.376085    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:05.382813    4724 sshutil.go:53] new ssh client: &{IP:172.29.100.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\id_rsa Username:docker}
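	(The sshutil line above carries everything needed to open the same session manually for post-mortem inspection; a sketch, assuming a standard OpenSSH client on the Windows host:

		ssh -i C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\id_rsa docker@172.29.100.38

	minikube opens this connection programmatically via sshutil.go rather than shelling out, so the command is only a manual equivalent for debugging, not part of the test flow.)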
	I0108 20:15:05.425027    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:15:05.425027    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:05.425027    4724 sshutil.go:53] new ssh client: &{IP:172.29.100.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\id_rsa Username:docker}
	I0108 20:15:05.504807    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:15:05.504807    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:05.506841    4724 sshutil.go:53] new ssh client: &{IP:172.29.100.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\id_rsa Username:docker}
	I0108 20:15:05.789538    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:15:05.789538    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:05.789538    4724 sshutil.go:53] new ssh client: &{IP:172.29.100.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\id_rsa Username:docker}
	I0108 20:15:05.912119    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:15:05.912182    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:05.912572    4724 sshutil.go:53] new ssh client: &{IP:172.29.100.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\id_rsa Username:docker}
	I0108 20:15:05.934955    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:15:05.936655    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:05.937527    4724 sshutil.go:53] new ssh client: &{IP:172.29.100.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\id_rsa Username:docker}
	I0108 20:15:06.010251    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:15:06.010404    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:06.011829    4724 sshutil.go:53] new ssh client: &{IP:172.29.100.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\id_rsa Username:docker}
	I0108 20:15:06.026231    4724 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0108 20:15:06.026341    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0108 20:15:06.032247    4724 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I0108 20:15:06.032247    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0108 20:15:06.069179    4724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0108 20:15:06.087358    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:15:06.087358    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:06.087358    4724 sshutil.go:53] new ssh client: &{IP:172.29.100.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\id_rsa Username:docker}
	I0108 20:15:06.171898    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:15:06.172011    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:06.173110    4724 sshutil.go:53] new ssh client: &{IP:172.29.100.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\id_rsa Username:docker}
	I0108 20:15:06.231584    4724 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0108 20:15:06.231685    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0108 20:15:06.288035    4724 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0108 20:15:06.288035    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0108 20:15:06.473769    4724 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0108 20:15:06.473850    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0108 20:15:06.516846    4724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 20:15:06.517834    4724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0108 20:15:06.536503    4724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 20:15:06.567223    4724 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0108 20:15:06.567223    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0108 20:15:06.625382    4724 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0108 20:15:06.625468    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0108 20:15:06.696004    4724 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0108 20:15:06.696004    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0108 20:15:06.709437    4724 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0108 20:15:06.709653    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0108 20:15:06.743684    4724 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0108 20:15:06.743773    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0108 20:15:06.823971    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:15:06.824022    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:06.824936    4724 sshutil.go:53] new ssh client: &{IP:172.29.100.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\id_rsa Username:docker}
	I0108 20:15:06.849589    4724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0108 20:15:06.875541    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:15:06.875696    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:06.876702    4724 sshutil.go:53] new ssh client: &{IP:172.29.100.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\id_rsa Username:docker}
	I0108 20:15:06.916874    4724 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I0108 20:15:06.916874    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0108 20:15:06.959189    4724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 20:15:06.983720    4724 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0108 20:15:06.983828    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0108 20:15:07.014452    4724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0108 20:15:07.041705    4724 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0108 20:15:07.041830    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0108 20:15:07.139453    4724 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0108 20:15:07.139535    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0108 20:15:07.165235    4724 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0108 20:15:07.165379    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0108 20:15:07.220710    4724 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0108 20:15:07.223311    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0108 20:15:07.262126    4724 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0108 20:15:07.262126    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0108 20:15:07.343398    4724 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 20:15:07.343398    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0108 20:15:07.366072    4724 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I0108 20:15:07.366072    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0108 20:15:07.366147    4724 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 20:15:07.366216    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0108 20:15:07.422648    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:15:07.422821    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:07.423554    4724 sshutil.go:53] new ssh client: &{IP:172.29.100.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\id_rsa Username:docker}
	I0108 20:15:07.480889    4724 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 20:15:07.480889    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0108 20:15:07.506762    4724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 20:15:07.524442    4724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:15:07.590417    4724 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 20:15:07.590489    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 20:15:07.781141    4724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 20:15:07.810016    4724 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 20:15:07.810146    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 20:15:08.071784    4724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 20:15:08.119416    4724 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0108 20:15:08.119416    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0108 20:15:08.185237    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:15:08.185379    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:08.186074    4724 sshutil.go:53] new ssh client: &{IP:172.29.100.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\id_rsa Username:docker}
	I0108 20:15:08.261935    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:15:08.261998    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:08.261998    4724 sshutil.go:53] new ssh client: &{IP:172.29.100.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\id_rsa Username:docker}
	I0108 20:15:08.323058    4724 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0108 20:15:08.323058    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0108 20:15:08.364440    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:15:08.364440    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:08.365156    4724 sshutil.go:53] new ssh client: &{IP:172.29.100.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\id_rsa Username:docker}
	I0108 20:15:08.487237    4724 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0108 20:15:08.487279    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0108 20:15:08.682369    4724 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0108 20:15:08.682452    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0108 20:15:08.754524    4724 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0108 20:15:08.754524    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0108 20:15:08.825201    4724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 20:15:08.850017    4724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 20:15:08.919574    4724 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0108 20:15:08.936708    4724 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0108 20:15:08.936818    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0108 20:15:09.122760    4724 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0108 20:15:09.122840    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0108 20:15:09.240848    4724 addons.go:237] Setting addon gcp-auth=true in "addons-084500"
	I0108 20:15:09.240941    4724 host.go:66] Checking if "addons-084500" exists ...
	I0108 20:15:09.242366    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:15:09.358972    4724 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0108 20:15:09.358972    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0108 20:15:09.669637    4724 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0108 20:15:09.669637    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0108 20:15:09.959114    4724 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 20:15:09.959114    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0108 20:15:10.083522    4724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.5656277s)
	I0108 20:15:10.083639    4724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.0143821s)
	I0108 20:15:10.399627    4724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 20:15:10.620373    4724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.1025166s)
	I0108 20:15:10.620373    4724 addons.go:473] Verifying addon registry=true in "addons-084500"
	I0108 20:15:10.621119    4724 out.go:177] * Verifying registry addon...
	I0108 20:15:10.624182    4724 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0108 20:15:10.640359    4724 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0108 20:15:10.640429    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:11.133097    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:11.390811    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:15:11.391047    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:11.403706    4724 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0108 20:15:11.403706    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-084500 ).state
	I0108 20:15:11.645876    4724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.1093461s)
	I0108 20:15:11.646032    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:12.138600    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:12.637434    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:13.163439    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:13.595124    4724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 20:15:13.595124    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:13.595124    4724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-084500 ).networkadapters[0]).ipaddresses[0]
	I0108 20:15:13.642658    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:14.132391    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:14.643374    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:15.138671    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:15.916717    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:16.151664    4724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.3020251s)
	I0108 20:15:16.153037    4724 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-084500 service yakd-dashboard -n yakd-dashboard
	
	
	I0108 20:15:16.232853    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:16.447964    4724 main.go:141] libmachine: [stdout =====>] : 172.29.100.38
	
	I0108 20:15:16.447964    4724 main.go:141] libmachine: [stderr =====>] : 
	I0108 20:15:16.448790    4724 sshutil.go:53] new ssh client: &{IP:172.29.100.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\id_rsa Username:docker}
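Note: the sshutil line above records the connection details used to reach the guest. A sketch of opening such an SSH session with golang.org/x/crypto/ssh follows; host, port, user and key path are copied from the log line, the code itself is an assumption (and skipping host-key verification is only acceptable for a throwaway test VM).

// Sketch of opening an SSH session with key auth, in the spirit of the sshutil line above.
// Connection details are from the log; the code is illustrative, not minikube's sshutil package.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-084500\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a disposable test VM only
	}
	client, err := ssh.Dial("tcp", "172.29.100.38:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("cat /var/lib/minikube/google_application_credentials.json")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}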
	I0108 20:15:16.648087    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:17.147339    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:17.640428    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:18.133815    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:18.640256    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:19.162024    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:19.290720    4724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (12.3314663s)
	I0108 20:15:19.290720    4724 addons.go:473] Verifying addon ingress=true in "addons-084500"
	I0108 20:15:19.290720    4724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (12.2762036s)
	I0108 20:15:19.291529    4724 out.go:177] * Verifying ingress addon...
	I0108 20:15:19.290720    4724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (11.7838255s)
	I0108 20:15:19.290720    4724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.7659568s)
	W0108 20:15:19.292230    4724 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0108 20:15:19.291341    4724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (11.5101387s)
	I0108 20:15:19.291499    4724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.2196003s)
	I0108 20:15:19.292230    4724 addons.go:473] Verifying addon metrics-server=true in "addons-084500"
	I0108 20:15:19.291529    4724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.4662724s)
	I0108 20:15:19.291529    4724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.4414565s)
	I0108 20:15:19.292230    4724 retry.go:31] will retry after 155.598318ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0108 20:15:19.295009    4724 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0108 20:15:19.300743    4724 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0108 20:15:19.300743    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0108 20:15:19.313970    4724 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
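Note: the warning above is an ordinary optimistic-concurrency conflict: the StorageClass changed between read and update. The usual client-go answer is to re-get the object and re-apply the change under retry.RetryOnConflict, sketched below under assumptions (kubeconfig path; class name local-path and intent taken from the message; the standard is-default-class annotation). This is not the code path minikube took here, where the error was surfaced instead.

// Sketch of the standard conflict-retry pattern for "the object has been modified":
// re-read the StorageClass and re-apply the change on each attempt. Assumed kubeconfig path.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		// Mark the class non-default using the well-known default-class annotation.
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}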
	I0108 20:15:19.463192    4724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 20:15:19.633897    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:19.812680    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:20.151635    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:20.339313    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:20.641537    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:20.817554    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:21.136281    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:21.314706    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:21.658661    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:21.822795    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:21.887495    4724 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (10.4837011s)
	I0108 20:15:21.887970    4724 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0108 20:15:21.889332    4724 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 20:15:21.890192    4724 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0108 20:15:21.890234    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0108 20:15:21.912608    4724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (11.5129203s)
	I0108 20:15:21.912731    4724 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-084500"
	I0108 20:15:21.913449    4724 out.go:177] * Verifying csi-hostpath-driver addon...
	I0108 20:15:21.915772    4724 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0108 20:15:21.927009    4724 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0108 20:15:21.927371    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:22.133955    4724 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0108 20:15:22.133955    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0108 20:15:22.145644    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:22.208216    4724 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 20:15:22.208216    4724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0108 20:15:22.306192    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:22.437222    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:22.512947    4724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 20:15:22.639947    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:22.811277    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:22.934331    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:23.073332    4724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.6101209s)
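Note: the failed apply at 20:15:19 and the successful --force re-apply completed above are a CRD ordering race: the VolumeSnapshotClass cannot be mapped until its CRD is served, so the addon manager retries (retry.go). A toy bounded-retry loop around kubectl apply in the same spirit follows; the binary and manifest paths come from the log, while the retry count and backoff values are assumptions.

// Toy retry loop around "kubectl apply", illustrating the retry behaviour logged above:
// if the CRDs are not served yet, the first apply fails and a later attempt succeeds.
// Paths come from the log; retry count and backoff are assumptions.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"apply",
		"-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
		"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
	}
	backoff := 155 * time.Millisecond // first delay roughly matching the retry.go line above
	for attempt := 1; attempt <= 5; attempt++ {
		cmd := exec.Command("/var/lib/minikube/binaries/v1.28.4/kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Println("applied successfully")
			return
		}
		fmt.Printf("attempt %d failed: %v\n%s\nwill retry after %s\n", attempt, err, out, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	fmt.Println("giving up after 5 attempts")
}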
	I0108 20:15:23.172900    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:23.307623    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:23.442188    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:23.638289    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:23.811617    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:23.936451    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:24.148710    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:24.311421    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:24.435749    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:24.633338    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:24.836158    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:24.856721    4724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.3364262s)
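Note: the "scp memory --> <path>" entries above stream generated manifests straight from memory to files on the node, after which kubectl applies them (Completed above in 2.3s). A hedged sketch of that pattern over an existing golang.org/x/crypto/ssh client follows; the helper names and the use of sudo tee are assumptions, not minikube's ssh_runner implementation.

// Sketch of the "scp memory --> <path>" pattern: stream an in-memory manifest to a file on
// the node over an existing SSH connection (e.g. the client from the earlier sketch), then
// apply it with kubectl. Helper names and "sudo tee" are assumptions.
package addons

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// copyToNode writes data to remotePath on the machine behind client.
func copyToNode(client *ssh.Client, data []byte, remotePath string) error {
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(data)
	// "sudo tee" lets us write into /etc/kubernetes/addons without logging in as root.
	return session.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
}

// applyOnNode runs kubectl on the node against the manifest that was just copied.
func applyOnNode(client *ssh.Client, remotePath string) ([]byte, error) {
	session, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer session.Close()
	cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
		"/var/lib/minikube/binaries/v1.28.4/kubectl apply -f " + remotePath
	return session.CombinedOutput(cmd)
}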
	I0108 20:15:24.868386    4724 addons.go:473] Verifying addon gcp-auth=true in "addons-084500"
	I0108 20:15:24.869090    4724 out.go:177] * Verifying gcp-auth addon...
	I0108 20:15:24.872053    4724 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0108 20:15:24.883769    4724 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0108 20:15:24.883769    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:24.933130    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:25.142728    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:25.305930    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:25.392333    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:25.431379    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:25.645452    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:25.802294    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:25.892056    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:25.929778    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:26.145191    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:26.304217    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:26.391767    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:26.430896    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:26.639373    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:26.817518    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:26.889051    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:26.926451    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:27.138241    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:27.311271    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:27.392831    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:27.438242    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:27.645273    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:27.808971    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:27.882666    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:27.942498    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:28.143927    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:28.316641    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:28.385616    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:28.443696    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:28.644155    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:28.810638    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:28.882443    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:28.939057    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:29.142245    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:29.308525    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:29.394252    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:29.434874    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:29.631659    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:29.814754    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:29.891776    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:29.943633    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:30.141791    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:30.313165    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:30.394422    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:30.431650    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:30.632576    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:30.816593    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:30.893938    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:30.934754    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:31.134991    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:31.309023    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:31.383912    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:31.427261    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:31.651349    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:31.815569    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:31.893317    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:31.930656    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:32.141928    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:32.310623    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:32.387321    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:32.427514    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:32.641113    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:32.807157    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:32.886667    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:32.932645    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:33.132281    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:33.307564    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:33.389525    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:33.432484    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:33.630540    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:33.809284    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:33.888429    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:33.940554    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:34.158078    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:34.302740    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:34.403316    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:34.423717    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:34.655591    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:34.810481    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:34.892300    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:34.934955    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:35.148210    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:35.304145    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:35.398558    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:35.441379    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:35.631639    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:35.813384    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:35.879247    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:35.936430    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:36.136636    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:36.307612    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:36.429154    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:36.631290    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:36.635891    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:36.804323    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:36.884170    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:36.942295    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:37.146260    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:37.317668    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:37.385071    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:37.428135    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:37.650727    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:37.815310    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:37.881844    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:37.927890    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:38.143061    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:38.316707    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:38.392560    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:38.426254    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:38.635310    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:38.802327    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:38.878709    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:38.942185    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:39.144279    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:39.314972    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:39.397511    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:39.435985    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:39.649707    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:39.821101    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:39.904212    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:39.936314    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:40.146145    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:40.323956    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:40.388398    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:40.440124    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:40.642898    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:40.807629    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:40.882873    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:40.941952    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:41.153030    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:41.307702    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:41.378897    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:41.441439    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:41.651538    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:41.802671    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:41.880062    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:41.938299    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:42.144785    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:42.302412    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:42.383605    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:42.438862    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:42.634567    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:42.801912    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:42.896291    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:42.931283    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:43.228781    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:43.305477    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:43.390821    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:43.433392    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:43.634274    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:43.816124    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:43.887379    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:43.931617    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:44.203847    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:44.322031    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:44.398282    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:44.442126    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:44.639202    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:44.819290    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:44.902500    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:44.938509    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:45.138597    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:45.321156    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:45.393666    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:45.435430    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:45.646648    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:45.816947    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:45.894963    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:45.933504    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:46.148286    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:46.314493    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:46.382668    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:46.442874    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:46.645231    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:46.802181    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:46.888986    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:46.941867    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:47.144012    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:47.304584    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:47.387376    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:47.431580    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:47.648980    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:47.804351    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:47.892548    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:47.933385    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:48.131654    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:48.313911    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:48.382784    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:48.433139    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:48.643879    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:48.810460    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:48.891771    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:48.931317    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:49.137676    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:49.311655    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:49.380797    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:49.442232    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:49.646052    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:49.814778    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:49.895011    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:49.928154    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:50.133524    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:50.312728    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:50.392694    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:50.434669    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:50.641505    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:50.813618    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:50.885329    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:50.945285    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:51.143321    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:51.314224    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:51.378312    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:51.435879    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:51.639286    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:51.815799    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:51.889647    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:51.933399    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:52.150231    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:52.308727    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:52.386702    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:52.439172    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:52.637168    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:52.811103    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:52.877993    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:52.930043    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:53.140298    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:53.306952    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:53.387336    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:53.434083    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:53.643194    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:53.810548    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:53.887696    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:53.927251    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:54.136883    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:54.311828    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:54.384545    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:54.440796    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:54.642115    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:54.810967    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:54.885360    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:54.943409    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:55.133092    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:55.309165    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:55.386371    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:55.426390    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:55.644810    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:55.803101    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:55.884224    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:55.950119    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:56.146871    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:56.318129    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:56.395674    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:56.437480    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:56.633966    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:56.809261    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:56.881286    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:56.938384    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:57.151879    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:57.306650    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:57.397595    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:57.438904    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:57.635250    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:57.813622    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:57.877077    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:57.939437    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:58.143815    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:58.304073    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:58.382800    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:58.431015    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:58.878378    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:58.878378    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:58.882402    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:58.939101    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:59.155829    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:59.306736    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:59.394200    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:59.438710    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:59.631413    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:15:59.814892    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:59.878455    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:59.943376    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:00.159385    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:00.305066    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:00.394509    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:00.428709    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:00.632455    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:00.805995    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:01.077230    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:01.095638    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:01.140424    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:01.303827    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:01.396306    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:01.436772    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:01.651724    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:01.805868    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:01.888905    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:01.931171    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:02.131431    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:02.310322    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:02.378301    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:02.434336    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:02.634011    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:02.805150    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:02.890066    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:02.927368    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:03.175808    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:03.303731    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:03.387610    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:03.431491    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:03.635141    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:03.816610    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:03.880927    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:03.935617    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:04.134346    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:04.317646    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:04.389304    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:04.433085    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:04.640528    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:04.803454    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:04.890959    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:04.926334    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:05.139764    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:05.322192    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:05.392064    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:05.427928    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:05.633204    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:05.804195    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:05.885658    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:05.925883    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:06.148757    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:06.387423    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:06.422699    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:06.439494    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:06.677981    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:06.801768    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:06.895034    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:06.930204    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:07.140551    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:07.308040    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:07.534689    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:07.535583    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:07.649781    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:07.819564    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:07.888427    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:07.931927    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:08.142540    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:08.309309    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:08.388590    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:08.434132    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:08.634870    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:08.805502    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:08.881625    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:08.942530    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:09.144514    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:09.305299    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:09.380990    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:09.431242    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:09.631615    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:09.814510    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:09.895011    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:09.931462    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:10.148171    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:10.309815    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:10.391472    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:10.429295    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:10.633274    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:10.804658    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:10.880708    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:10.932928    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:11.149628    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:11.313918    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:11.383382    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:11.440680    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:11.635868    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:11.808602    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:11.883990    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:11.929927    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:12.137480    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:12.316265    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:12.386222    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:12.450848    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:12.633396    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:12.812032    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:12.892886    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:12.928003    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:13.146056    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:13.301414    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:13.393678    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:13.433065    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:13.643416    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:13.816596    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:13.894429    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:13.933057    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:14.147301    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:14.309813    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:14.387018    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:14.425707    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:14.634338    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:14.805615    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:14.886023    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:14.925055    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:15.151297    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:15.314176    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:15.382129    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:15.448384    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:15.643166    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:15.807131    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:15.894262    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:15.935563    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:16.136070    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:16.301945    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:16.404606    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:16.442600    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:16.647172    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:16.827837    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:16.992424    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:16.999284    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:17.146063    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:16:17.324130    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:17.382306    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:17.442813    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:17.647049    4724 kapi.go:107] duration metric: took 1m7.0225174s to wait for kubernetes.io/minikube-addons=registry ...
	I0108 20:16:17.816950    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:17.896036    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:17.939231    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:18.315171    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:18.394901    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:18.439068    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:18.818992    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:18.893077    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:18.941510    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:19.306391    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:19.377699    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:19.437187    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:19.817562    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:19.877994    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:19.928832    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:20.423181    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:20.423431    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:20.433098    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:20.803160    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:20.887849    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:20.935810    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:21.322024    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:21.393197    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:21.431610    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:21.805802    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:21.892099    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:21.936917    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:22.332867    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:22.395755    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:22.435679    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:22.810743    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:22.882287    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:22.941478    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:23.330728    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:23.399556    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:23.432964    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:23.802899    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:23.974078    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:23.976635    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:24.313750    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:24.389995    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:24.435446    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:24.820318    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:24.884952    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:24.945184    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:25.311601    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:25.385154    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:25.442380    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:25.813272    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:25.890807    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:25.930901    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:26.313244    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:26.385968    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:26.432840    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:26.813398    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:26.892738    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:26.944255    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:27.313093    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:27.525936    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:27.526926    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:27.811516    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:27.884285    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:27.938963    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:28.321782    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:28.380196    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:28.442892    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:28.805426    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:28.894151    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:28.931369    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:29.311650    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:29.386576    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:29.429339    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:29.817056    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:29.896824    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:29.961213    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:30.316037    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:30.396116    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:30.435367    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:30.815517    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:30.883513    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:30.931993    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:31.306920    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:31.385747    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:31.440033    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:31.929210    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:31.929843    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:31.937046    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:32.302953    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:32.392297    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:32.436571    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:32.807649    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:32.895641    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:32.931720    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:33.312798    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:33.390925    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:33.433666    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:33.806618    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:33.886855    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:33.937529    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:34.315174    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:34.393163    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:34.432435    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:34.809121    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:34.892109    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:34.926266    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:35.320539    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:35.382404    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:35.437248    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:35.806300    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:35.881993    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:35.924115    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:36.304468    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:36.398762    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:36.443320    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:36.803506    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:36.894988    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:36.940132    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:37.308750    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:37.384538    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:37.446149    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:37.817225    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:37.892798    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:37.944470    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:38.313937    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:38.385953    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:38.436412    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:38.813775    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:38.887961    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:38.924530    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:39.320343    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:39.390573    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:39.439427    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:39.801845    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:39.894769    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:39.935611    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:40.309572    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:40.383737    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:40.429257    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:40.815069    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:40.893214    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:40.937114    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:41.316719    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:41.402482    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:41.428865    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:41.805959    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:41.879636    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:41.938057    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:42.305196    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:42.377921    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:42.441154    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:42.811859    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:42.884836    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:42.931327    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:43.315534    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:43.382211    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:43.440984    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:43.819619    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:43.892358    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:43.933042    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:44.320542    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:44.376470    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:44.436569    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:44.812829    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:44.883846    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:44.946071    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:45.308363    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:45.391303    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:45.432405    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:45.829993    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:45.882140    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:46.026553    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:46.315711    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:46.388309    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:46.430574    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:46.823220    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:46.898522    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:46.934882    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:47.339046    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:47.394749    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:47.435985    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:47.801103    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:47.884588    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:47.936508    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:48.320117    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:48.378908    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:48.437630    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:48.801799    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:48.883402    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:48.926623    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:49.309049    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:49.388310    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:49.445980    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:49.811826    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:49.893166    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:49.936136    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:50.312408    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:50.378145    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:50.436489    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:50.824107    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:50.893772    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:50.931063    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:51.309078    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:51.386354    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:51.430309    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:51.817046    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:51.891281    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:51.943900    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:52.309365    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:52.396382    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:52.439726    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:52.812481    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:52.892876    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:52.932314    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:53.317606    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:53.397818    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:53.452586    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:53.816140    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:53.894132    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:53.937352    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:54.316387    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:54.387204    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:54.432544    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:54.816908    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:54.883945    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:54.930893    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:55.314199    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:55.395614    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:55.435814    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:55.823187    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:55.889824    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:55.929565    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:56.330884    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:56.387254    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:56.427935    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:56.809884    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:56.897876    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:56.937054    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:57.306026    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:57.382726    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:57.426059    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:57.810779    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:57.883965    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:57.923472    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:58.400594    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:58.427489    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:58.469053    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:58.803009    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:58.896012    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:58.934939    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:59.304108    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:59.387677    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:59.429678    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:16:59.830015    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:16:59.884691    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:16:59.934745    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:00.305855    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:00.385941    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:00.427746    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:00.813715    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:00.888859    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:00.925245    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:01.302507    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:01.378809    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:01.434235    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:01.804978    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:01.881491    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:01.942373    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:02.306000    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:02.381836    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:02.440089    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:02.823006    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:02.875678    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:02.942029    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:03.310880    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:03.388472    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:03.432929    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:03.819325    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:03.896800    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:03.940625    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:04.315180    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:04.385548    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:04.424846    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:04.808469    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:04.894601    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:04.932415    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:05.305175    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:05.378931    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:05.435537    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:05.815483    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:05.884881    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:05.941188    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:06.324541    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:06.384642    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:06.442511    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:06.810507    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:06.884337    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:06.924531    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:07.313911    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:07.389299    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:07.432006    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:07.807640    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:07.882780    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:07.939452    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:08.311144    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:08.383694    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:08.435202    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:08.803917    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:08.892897    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:08.923014    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:09.306154    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:09.395856    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:09.439815    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:09.809149    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:09.882935    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:09.939667    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:10.319175    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:10.397068    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:10.436616    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:10.802923    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:10.896132    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:10.937222    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:11.305957    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:11.396200    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:11.433608    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:11.812018    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:11.889666    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:11.928088    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:12.305305    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:12.386445    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:12.424801    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:12.804384    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:12.886619    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:12.925343    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:13.308469    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:13.382720    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:13.444636    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:13.815851    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:13.885906    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:13.939027    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:14.316492    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:14.396578    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:14.432870    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:14.803406    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:14.897921    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:14.940574    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:15.303922    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:15.380610    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:15.426747    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:15.814468    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:15.894919    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:15.930660    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:16.307969    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:16.381734    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:16.440524    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:16.826181    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:16.879299    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:16.948362    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:17.304792    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:17.382803    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:17.439054    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:17.804105    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:17.883462    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:17.937344    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:18.308886    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:18.384743    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:18.425845    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:18.818603    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:18.891665    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:18.928071    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:19.303524    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:19.385613    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:19.430517    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:19.812702    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:19.885833    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:19.925723    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:20.303981    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:20.390220    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:20.429493    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:17:20.817321    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:20.884118    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:20.940331    4724 kapi.go:107] duration metric: took 1m59.0238415s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0108 20:17:21.308107    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:21.393340    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:21.813708    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:21.883060    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:22.310919    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:22.387936    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:22.803613    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:22.880499    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:23.311598    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:23.380183    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:23.817183    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:23.886388    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:24.313756    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:24.390723    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:24.809009    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:24.878539    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:25.310931    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:25.384318    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:25.816540    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:25.884662    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:26.308882    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:26.384340    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:26.813587    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:26.879825    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:27.315523    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:27.386201    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:27.807850    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:27.891835    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:28.302976    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:28.380719    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:28.812104    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:28.879360    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:29.326634    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:29.379493    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:29.812904    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:29.880997    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:30.317599    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:30.384163    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:30.810899    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:30.878991    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:31.310055    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:31.382693    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:31.804412    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:31.887013    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:32.317476    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:32.392178    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:32.804066    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:32.892946    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:33.315519    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:33.382749    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:33.819183    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:33.889863    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:34.308215    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:34.385897    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:34.806698    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:34.885223    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:35.308966    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:35.389635    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:35.802263    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:35.890162    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:36.303009    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:36.382073    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:36.805587    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:36.881577    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:37.304123    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:37.399190    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:37.824402    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:37.885005    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:38.318294    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:38.385523    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:38.809385    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:38.891742    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:39.307429    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:39.392509    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:39.814318    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:39.887899    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:40.324368    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:40.393506    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:40.827841    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:40.885633    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:41.320541    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:41.378663    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:41.821382    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:41.888167    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:42.313338    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:42.388974    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:42.805709    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:42.889668    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:43.312618    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:43.381251    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:43.810963    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:43.891535    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:44.485180    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:44.486363    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:44.822495    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:44.893083    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:45.322787    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:45.394756    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:45.811207    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:45.886601    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:46.313373    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:46.388699    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:46.806449    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:46.884225    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:47.307563    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:47.380178    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:47.804148    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:47.886435    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:48.312535    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:48.381250    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:48.812130    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:48.892035    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:49.320492    4724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:17:49.392329    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:49.811091    4724 kapi.go:107] duration metric: took 2m30.5153025s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0108 20:17:49.879084    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:50.386274    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:50.888524    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:51.385252    4724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:17:51.892862    4724 kapi.go:107] duration metric: took 2m27.0199637s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0108 20:17:51.893539    4724 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-084500 cluster.
	I0108 20:17:51.894061    4724 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0108 20:17:51.894680    4724 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0108 20:17:51.896088    4724 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, yakd, helm-tiller, storage-provisioner, inspektor-gadget, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0108 20:17:51.897007    4724 addons.go:508] enable addons completed in 3m5.4098357s: enabled=[nvidia-device-plugin cloud-spanner ingress-dns yakd helm-tiller storage-provisioner inspektor-gadget metrics-server storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0108 20:17:51.897007    4724 start.go:233] waiting for cluster config update ...
	I0108 20:17:51.897007    4724 start.go:242] writing updated cluster config ...
	I0108 20:17:51.909302    4724 ssh_runner.go:195] Run: rm -f paused
	I0108 20:17:52.157778    4724 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 20:17:52.158943    4724 out.go:177] * Done! kubectl is now configured to use "addons-084500" cluster and "default" namespace by default
	
	
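	(Editor's note, not part of the captured log: the gcp-auth hint above refers to the pod label `gcp-auth-skip-secret`. As a minimal sketch under assumptions — the pod name "no-gcp-auth", the image, and the use of the default kubeconfig are illustrative only, not taken from this run — a pod that opts out of the credential mount could be created with client-go like this:)

	// Minimal illustrative sketch (assumption, not from the captured run):
	// create a pod carrying the "gcp-auth-skip-secret" label mentioned in the
	// addon output above, so the gcp-auth webhook skips mounting credentials.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (e.g. the context minikube configured).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-auth", // hypothetical name, for illustration only
				Labels: map[string]string{
					"gcp-auth-skip-secret": "true", // label key from the addon hint
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "nginx"},
				},
			},
		}

		created, err := clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("created pod:", created.Name)
	}
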
	==> Docker <==
	-- Journal begins at Mon 2024-01-08 20:12:39 UTC, ends at Mon 2024-01-08 20:18:47 UTC. --
	Jan 08 20:18:34 addons-084500 dockerd[1323]: time="2024-01-08T20:18:34.699371077Z" level=warning msg="cleaning up after shim disconnected" id=a2c606a312cbe9e8333c22662a8da64d44a92b7b27103d6d8e721f6a33930bbf namespace=moby
	Jan 08 20:18:34 addons-084500 dockerd[1323]: time="2024-01-08T20:18:34.699546684Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 20:18:35 addons-084500 dockerd[1323]: time="2024-01-08T20:18:35.035627556Z" level=info msg="shim disconnected" id=3fa5abb30c1bf518af9f8d9b8e643a29cb9b6c318ecf6c70c78594557cbd1029 namespace=moby
	Jan 08 20:18:35 addons-084500 dockerd[1323]: time="2024-01-08T20:18:35.036310984Z" level=warning msg="cleaning up after shim disconnected" id=3fa5abb30c1bf518af9f8d9b8e643a29cb9b6c318ecf6c70c78594557cbd1029 namespace=moby
	Jan 08 20:18:35 addons-084500 dockerd[1316]: time="2024-01-08T20:18:35.036498592Z" level=info msg="ignoring event" container=3fa5abb30c1bf518af9f8d9b8e643a29cb9b6c318ecf6c70c78594557cbd1029 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 20:18:35 addons-084500 dockerd[1323]: time="2024-01-08T20:18:35.036684199Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 20:18:38 addons-084500 dockerd[1323]: time="2024-01-08T20:18:38.794442885Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 20:18:38 addons-084500 dockerd[1323]: time="2024-01-08T20:18:38.794695895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 20:18:38 addons-084500 dockerd[1323]: time="2024-01-08T20:18:38.794718296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 20:18:38 addons-084500 dockerd[1323]: time="2024-01-08T20:18:38.794729696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 20:18:39 addons-084500 cri-dockerd[1207]: time="2024-01-08T20:18:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef942408d323ee0f8837267d7139c2d61b515864662fb3a1f94cf3b2c7af5726/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jan 08 20:18:40 addons-084500 cri-dockerd[1207]: time="2024-01-08T20:18:40Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Jan 08 20:18:40 addons-084500 dockerd[1323]: time="2024-01-08T20:18:40.428553168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 20:18:40 addons-084500 dockerd[1323]: time="2024-01-08T20:18:40.430919660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 20:18:40 addons-084500 dockerd[1323]: time="2024-01-08T20:18:40.431054666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 20:18:40 addons-084500 dockerd[1323]: time="2024-01-08T20:18:40.431089567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 20:18:41 addons-084500 cri-dockerd[1207]: time="2024-01-08T20:18:41Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931"
	Jan 08 20:18:41 addons-084500 dockerd[1323]: time="2024-01-08T20:18:41.863292450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 20:18:41 addons-084500 dockerd[1323]: time="2024-01-08T20:18:41.864422694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 20:18:41 addons-084500 dockerd[1323]: time="2024-01-08T20:18:41.864585700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 20:18:41 addons-084500 dockerd[1323]: time="2024-01-08T20:18:41.864754006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 20:18:42 addons-084500 dockerd[1316]: time="2024-01-08T20:18:42.914365052Z" level=info msg="ignoring event" container=d95b8cbfd8548c34e0185c2596ab3bd1f49db0e6d49aaca43a374369787a2d60 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 20:18:42 addons-084500 dockerd[1323]: time="2024-01-08T20:18:42.915827408Z" level=info msg="shim disconnected" id=d95b8cbfd8548c34e0185c2596ab3bd1f49db0e6d49aaca43a374369787a2d60 namespace=moby
	Jan 08 20:18:42 addons-084500 dockerd[1323]: time="2024-01-08T20:18:42.915899411Z" level=warning msg="cleaning up after shim disconnected" id=d95b8cbfd8548c34e0185c2596ab3bd1f49db0e6d49aaca43a374369787a2d60 namespace=moby
	Jan 08 20:18:42 addons-084500 dockerd[1323]: time="2024-01-08T20:18:42.915919612Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	d95b8cbfd8548       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931                            6 seconds ago        Exited              gadget                                   4                   5360c3c8e182a       gadget-vfh65
	400bb03939eec       nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026                                                                7 seconds ago        Running             task-pv-container                        0                   ef942408d323e       task-pv-pod-restore
	0fc280b5fbe36       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                                 57 seconds ago       Running             gcp-auth                                 0                   2fed4c1c7d607       gcp-auth-d4c87556c-nnfrs
	525b3a51afc8b       registry.k8s.io/ingress-nginx/controller@sha256:b3aba22b1da80e7acfc52b115cae1d4c687172cbf2b742d5b502419c25ff340e                             About a minute ago   Running             controller                               0                   95d46f89c4175       ingress-nginx-controller-69cff4fd79-z4gkw
	8b9099efcf47b       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   6682f90205be9       csi-hostpathplugin-fnjmb
	4d4748d6d0b86       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   6682f90205be9       csi-hostpathplugin-fnjmb
	498ca88c80e6c       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   6682f90205be9       csi-hostpathplugin-fnjmb
	50353c778506b       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   6682f90205be9       csi-hostpathplugin-fnjmb
	7be7e1a95619b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                About a minute ago   Running             node-driver-registrar                    0                   6682f90205be9       csi-hostpathplugin-fnjmb
	d804439cd12bb       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   About a minute ago   Running             csi-external-health-monitor-controller   0                   6682f90205be9       csi-hostpathplugin-fnjmb
	046042ce3fcb1       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             About a minute ago   Running             csi-attacher                             0                   16c56904bc20e       csi-hostpath-attacher-0
	7a6f7a81f1b29       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              About a minute ago   Running             csi-resizer                              0                   a08348b288f03       csi-hostpath-resizer-0
	98a8c63e0cb56       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   68333493bdced       snapshot-controller-58dbcc7b99-czw56
	3b25025f7871b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80                   About a minute ago   Exited              patch                                    0                   dcb52920c5208       ingress-nginx-admission-patch-2vml7
	9c4062e4245a8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80                   About a minute ago   Exited              create                                   0                   c704113c2d7c5       ingress-nginx-admission-create-f26wj
	702ba8c24b74b       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       About a minute ago   Running             local-path-provisioner                   0                   7f3d7172b5125       local-path-provisioner-78b46b4d5c-bt2sv
	7cd78ced04d38       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   e12a80055fdd6       snapshot-controller-58dbcc7b99-6qrhl
	33653a3780623       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        2 minutes ago        Running             yakd                                     0                   534ee4fce3e82       yakd-dashboard-9947fc6bf-gcq5f
	b7da0195c094e       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  2 minutes ago        Running             tiller                                   0                   3e2a9543195a7       tiller-deploy-7b677967b9-6qkvr
	fae7a1e0044c1       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             2 minutes ago        Running             minikube-ingress-dns                     0                   fbdbd4c63be82       kube-ingress-dns-minikube
	f2b264ccaf9e2       nvcr.io/nvidia/k8s-device-plugin@sha256:339be23400f58c04f09b6ba1d4d2e0e7120648f2b114880513685b22093311f1                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   35cc11b6bfe43       nvidia-device-plugin-daemonset-jftp2
	b2fcdbf2b12de       6e38f40d628db                                                                                                                                3 minutes ago        Running             storage-provisioner                      0                   b8d4948c863e4       storage-provisioner
	149daf748e65b       ead0a4a53df89                                                                                                                                3 minutes ago        Running             coredns                                  0                   0d988cf3397af       coredns-5dd5756b68-bt568
	d033da4b06b52       83f6cc407eed8                                                                                                                                3 minutes ago        Running             kube-proxy                               0                   71393ec1d1cc3       kube-proxy-lddb9
	801676b354827       73deb9a3f7025                                                                                                                                4 minutes ago        Running             etcd                                     0                   dd39f93ea17ad       etcd-addons-084500
	bcfb1020bc366       e3db313c6dbc0                                                                                                                                4 minutes ago        Running             kube-scheduler                           0                   bdcf369cacd31       kube-scheduler-addons-084500
	53ed11f8118bf       d058aa5ab969c                                                                                                                                4 minutes ago        Running             kube-controller-manager                  0                   3cd20d85d506e       kube-controller-manager-addons-084500
	5867dca9f4e4c       7fe0e6f37db33                                                                                                                                4 minutes ago        Running             kube-apiserver                           0                   8800edeb332b2       kube-apiserver-addons-084500
	
	
	==> controller_ingress [525b3a51afc8] <==
	W0108 20:17:48.802639       8 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0108 20:17:48.802909       8 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0108 20:17:48.813333       8 main.go:249] "Running in Kubernetes cluster" major="1" minor="28" git="v1.28.4" state="clean" commit="bae2c62678db2b5053817bc97181fcc2e8388103" platform="linux/amd64"
	I0108 20:17:49.187964       8 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0108 20:17:49.208929       8 ssl.go:536] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0108 20:17:49.231086       8 nginx.go:260] "Starting NGINX Ingress controller"
	I0108 20:17:49.268078       8 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"48f62a18-befb-48f8-ac58-a04c71b48eb2", APIVersion:"v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0108 20:17:49.281675       8 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"24ecc5b5-da38-420d-9cac-792e6a6d4960", APIVersion:"v1", ResourceVersion:"706", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0108 20:17:49.281716       8 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"f54a45b8-6375-491a-ad5d-d73fe371c0b7", APIVersion:"v1", ResourceVersion:"707", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0108 20:17:50.432270       8 nginx.go:303] "Starting NGINX process"
	I0108 20:17:50.432662       8 leaderelection.go:245] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0108 20:17:50.433557       8 nginx.go:323] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0108 20:17:50.433982       8 controller.go:190] "Configuration changes detected, backend reload required"
	I0108 20:17:50.475480       8 leaderelection.go:255] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0108 20:17:50.475933       8 status.go:84] "New leader elected" identity="ingress-nginx-controller-69cff4fd79-z4gkw"
	I0108 20:17:50.484028       8 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-69cff4fd79-z4gkw" node="addons-084500"
	I0108 20:17:50.559885       8 controller.go:210] "Backend successfully reloaded"
	I0108 20:17:50.560047       8 controller.go:221] "Initial sync, sleeping for 1 second"
	I0108 20:17:50.560185       8 event.go:298] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-69cff4fd79-z4gkw", UID:"a7a4f356-3189-46b3-9a8d-899330de57f8", APIVersion:"v1", ResourceVersion:"732", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         f503c4bb5fa7d857ad29e94970eb550c2bc00b7c
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.21.6
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [149daf748e65] <==
	[INFO] plugin/reload: Running configuration SHA512 = ecb7ac485f9c2b1ea9804efa09f1e19321672736f367e944ec746de174838ff4ac13f0ea72d0f91eb72162a02d709deb909d06018a457ac2adfe17d34b3613d8
	[INFO] Reloading complete
	[INFO] 127.0.0.1:59963 - 51853 "HINFO IN 3852559733756153154.6342366305175584530. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.035807072s
	[INFO] 10.244.0.6:53480 - 15939 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001094625s
	[INFO] 10.244.0.6:53480 - 35612 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001339031s
	[INFO] 10.244.0.6:41145 - 58082 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000107802s
	[INFO] 10.244.0.6:41145 - 10726 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000274407s
	[INFO] 10.244.0.6:41563 - 31380 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000111403s
	[INFO] 10.244.0.6:41563 - 63635 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000135704s
	[INFO] 10.244.0.6:36316 - 24145 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000123203s
	[INFO] 10.244.0.6:36316 - 32082 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000105202s
	[INFO] 10.244.0.6:58360 - 36048 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000162504s
	[INFO] 10.244.0.6:49806 - 42323 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000211805s
	[INFO] 10.244.0.6:60530 - 64455 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000054801s
	[INFO] 10.244.0.6:47073 - 5101 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000100003s
	[INFO] 10.244.0.21:49722 - 1373 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000637142s
	[INFO] 10.244.0.21:53421 - 50989 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000825055s
	[INFO] 10.244.0.21:41727 - 60045 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000117407s
	[INFO] 10.244.0.21:41359 - 34511 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000134309s
	[INFO] 10.244.0.21:51513 - 59293 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000408727s
	[INFO] 10.244.0.21:35197 - 26390 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126008s
	[INFO] 10.244.0.21:42883 - 22802 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.002867189s
	[INFO] 10.244.0.21:36294 - 18538 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 230 0.002185444s
	[INFO] 10.244.0.24:42083 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000281715s
	[INFO] 10.244.0.24:40119 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000265915s
	
	
	==> describe nodes <==
	Name:               addons-084500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-084500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=addons-084500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T20_14_34_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-084500
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-084500"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 20:14:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-084500
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 20:18:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 20:18:40 +0000   Mon, 08 Jan 2024 20:14:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 20:18:40 +0000   Mon, 08 Jan 2024 20:14:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 20:18:40 +0000   Mon, 08 Jan 2024 20:14:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 20:18:40 +0000   Mon, 08 Jan 2024 20:14:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.100.38
	  Hostname:    addons-084500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914580Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914580Ki
	  pods:               110
	System Info:
	  Machine ID:                 02ab00d75166441e9e381a61282a7f9e
	  System UUID:                72e06d14-d696-3f46-ae36-1f137c60d6ef
	  Boot ID:                    adf013aa-d2aa-4295-8d23-7f17e0459b46
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     task-pv-pod-restore                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  gadget                      gadget-vfh65                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  gcp-auth                    gcp-auth-d4c87556c-nnfrs                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  ingress-nginx               ingress-nginx-controller-69cff4fd79-z4gkw    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m29s
	  kube-system                 coredns-5dd5756b68-bt568                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m2s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 csi-hostpathplugin-fnjmb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 etcd-addons-084500                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m16s
	  kube-system                 kube-apiserver-addons-084500                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-controller-manager-addons-084500        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 kube-proxy-lddb9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-addons-084500                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 nvidia-device-plugin-daemonset-jftp2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 snapshot-controller-58dbcc7b99-6qrhl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 snapshot-controller-58dbcc7b99-czw56         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 tiller-deploy-7b677967b9-6qkvr               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  local-path-storage          local-path-provisioner-78b46b4d5c-bt2sv      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-gcq5f               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m53s                  kube-proxy       
	  Normal  Starting                 4m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m23s (x8 over 4m23s)  kubelet          Node addons-084500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x8 over 4m23s)  kubelet          Node addons-084500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x7 over 4m23s)  kubelet          Node addons-084500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m14s                  kubelet          Node addons-084500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s                  kubelet          Node addons-084500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s                  kubelet          Node addons-084500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m10s                  kubelet          Node addons-084500 status is now: NodeReady
	  Normal  RegisteredNode           4m2s                   node-controller  Node addons-084500 event: Registered Node addons-084500 in Controller
	
	
	==> dmesg <==
	[  +1.343934] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.405168] systemd-fstab-generator[1162]: Ignoring "noauto" for root device
	[  +0.170288] systemd-fstab-generator[1173]: Ignoring "noauto" for root device
	[  +0.183220] systemd-fstab-generator[1184]: Ignoring "noauto" for root device
	[  +0.239877] systemd-fstab-generator[1199]: Ignoring "noauto" for root device
	[  +8.734875] systemd-fstab-generator[1307]: Ignoring "noauto" for root device
	[  +6.186051] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.479457] systemd-fstab-generator[1673]: Ignoring "noauto" for root device
	[  +0.798351] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.527599] systemd-fstab-generator[2645]: Ignoring "noauto" for root device
	[ +25.615103] kauditd_printk_skb: 19 callbacks suppressed
	[Jan 8 20:15] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.984770] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.053718] kauditd_printk_skb: 48 callbacks suppressed
	[  +2.808875] hrtimer: interrupt took 466935 ns
	[Jan 8 20:16] kauditd_printk_skb: 20 callbacks suppressed
	[Jan 8 20:17] kauditd_printk_skb: 26 callbacks suppressed
	[ +22.081942] kauditd_printk_skb: 18 callbacks suppressed
	[ +11.133268] kauditd_printk_skb: 3 callbacks suppressed
	[ +12.209405] kauditd_printk_skb: 25 callbacks suppressed
	[Jan 8 20:18] kauditd_printk_skb: 7 callbacks suppressed
	[ +16.367992] kauditd_printk_skb: 3 callbacks suppressed
	[  +6.264975] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.327188] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.139705] kauditd_printk_skb: 23 callbacks suppressed
	
	
	==> etcd [801676b35482] <==
	{"level":"info","ts":"2024-01-08T20:16:07.527497Z","caller":"traceutil/trace.go:171","msg":"trace[1600327616] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:953; }","duration":"145.435359ms","start":"2024-01-08T20:16:07.382051Z","end":"2024-01-08T20:16:07.527486Z","steps":["trace[1600327616] 'range keys from in-memory index tree'  (duration: 145.043948ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T20:16:07.527886Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.645278ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82004"}
	{"level":"info","ts":"2024-01-08T20:16:07.527954Z","caller":"traceutil/trace.go:171","msg":"trace[817946395] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:953; }","duration":"107.71518ms","start":"2024-01-08T20:16:07.420231Z","end":"2024-01-08T20:16:07.527947Z","steps":["trace[817946395] 'range keys from in-memory index tree'  (duration: 107.35447ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T20:16:16.986761Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.430492ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10572"}
	{"level":"info","ts":"2024-01-08T20:16:16.987016Z","caller":"traceutil/trace.go:171","msg":"trace[567331217] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:962; }","duration":"109.699398ms","start":"2024-01-08T20:16:16.877303Z","end":"2024-01-08T20:16:16.987002Z","steps":["trace[567331217] 'range keys from in-memory index tree'  (duration: 108.97998ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T20:16:20.417307Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.083555ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13485"}
	{"level":"info","ts":"2024-01-08T20:16:20.41751Z","caller":"traceutil/trace.go:171","msg":"trace[541393955] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:971; }","duration":"112.445463ms","start":"2024-01-08T20:16:20.30505Z","end":"2024-01-08T20:16:20.417495Z","steps":["trace[541393955] 'range keys from in-memory index tree'  (duration: 111.973752ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T20:16:27.518071Z","caller":"traceutil/trace.go:171","msg":"trace[1648204206] linearizableReadLoop","detail":"{readStateIndex:1032; appliedIndex:1031; }","duration":"127.654563ms","start":"2024-01-08T20:16:27.390386Z","end":"2024-01-08T20:16:27.51804Z","steps":["trace[1648204206] 'read index received'  (duration: 127.517761ms)","trace[1648204206] 'applied index is now lower than readState.Index'  (duration: 136.202µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T20:16:27.518386Z","caller":"traceutil/trace.go:171","msg":"trace[619925410] transaction","detail":"{read_only:false; response_revision:989; number_of_response:1; }","duration":"157.961648ms","start":"2024-01-08T20:16:27.360415Z","end":"2024-01-08T20:16:27.518377Z","steps":["trace[619925410] 'process raft request'  (duration: 157.525739ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T20:16:27.52041Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.937307ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10572"}
	{"level":"info","ts":"2024-01-08T20:16:27.52059Z","caller":"traceutil/trace.go:171","msg":"trace[1263848646] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:989; }","duration":"130.217812ms","start":"2024-01-08T20:16:27.390361Z","end":"2024-01-08T20:16:27.520579Z","steps":["trace[1263848646] 'agreement among raft nodes before linearized reading'  (duration: 129.768204ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T20:16:31.924016Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.359796ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-01-08T20:16:31.924087Z","caller":"traceutil/trace.go:171","msg":"trace[1152127738] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; response_count:0; response_revision:993; }","duration":"210.460898ms","start":"2024-01-08T20:16:31.713615Z","end":"2024-01-08T20:16:31.924076Z","steps":["trace[1152127738] 'count revisions from in-memory index tree'  (duration: 210.264895ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T20:16:31.924592Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.790707ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13485"}
	{"level":"info","ts":"2024-01-08T20:16:31.924631Z","caller":"traceutil/trace.go:171","msg":"trace[790670521] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:993; }","duration":"116.817608ms","start":"2024-01-08T20:16:31.807792Z","end":"2024-01-08T20:16:31.924609Z","steps":["trace[790670521] 'range keys from in-memory index tree'  (duration: 116.698106ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T20:16:39.724713Z","caller":"traceutil/trace.go:171","msg":"trace[1347376351] transaction","detail":"{read_only:false; response_revision:1014; number_of_response:1; }","duration":"224.049779ms","start":"2024-01-08T20:16:39.500568Z","end":"2024-01-08T20:16:39.724618Z","steps":["trace[1347376351] 'process raft request'  (duration: 223.829475ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T20:17:16.821544Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.716085ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T20:17:16.82161Z","caller":"traceutil/trace.go:171","msg":"trace[1273225257] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:1181; }","duration":"134.799786ms","start":"2024-01-08T20:17:16.686797Z","end":"2024-01-08T20:17:16.821597Z","steps":["trace[1273225257] 'count revisions from in-memory index tree'  (duration: 134.549683ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T20:17:44.478378Z","caller":"traceutil/trace.go:171","msg":"trace[821843805] linearizableReadLoop","detail":"{readStateIndex:1310; appliedIndex:1309; }","duration":"171.684971ms","start":"2024-01-08T20:17:44.306674Z","end":"2024-01-08T20:17:44.478359Z","steps":["trace[821843805] 'read index received'  (duration: 171.441854ms)","trace[821843805] 'applied index is now lower than readState.Index'  (duration: 242.017µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T20:17:44.479144Z","caller":"traceutil/trace.go:171","msg":"trace[314632572] transaction","detail":"{read_only:false; response_revision:1247; number_of_response:1; }","duration":"204.373788ms","start":"2024-01-08T20:17:44.274759Z","end":"2024-01-08T20:17:44.479133Z","steps":["trace[314632572] 'process raft request'  (duration: 203.458923ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T20:17:44.481116Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.406863ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13903"}
	{"level":"info","ts":"2024-01-08T20:17:44.481529Z","caller":"traceutil/trace.go:171","msg":"trace[1202285807] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1247; }","duration":"174.862396ms","start":"2024-01-08T20:17:44.306656Z","end":"2024-01-08T20:17:44.481519Z","steps":["trace[1202285807] 'agreement among raft nodes before linearized reading'  (duration: 174.36326ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T20:17:50.623292Z","caller":"traceutil/trace.go:171","msg":"trace[16566437] transaction","detail":"{read_only:false; response_revision:1269; number_of_response:1; }","duration":"139.958837ms","start":"2024-01-08T20:17:50.483316Z","end":"2024-01-08T20:17:50.623275Z","steps":["trace[16566437] 'process raft request'  (duration: 139.815027ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T20:17:52.676023Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.215856ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T20:17:52.676114Z","caller":"traceutil/trace.go:171","msg":"trace[1866415672] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:0; response_revision:1287; }","duration":"117.313362ms","start":"2024-01-08T20:17:52.558787Z","end":"2024-01-08T20:17:52.6761Z","steps":["trace[1866415672] 'range keys from in-memory index tree'  (duration: 117.138551ms)"],"step_count":1}
	
	
	==> gcp-auth [0fc280b5fbe3] <==
	2024/01/08 20:17:50 GCP Auth Webhook started!
	2024/01/08 20:17:53 Ready to marshal response ...
	2024/01/08 20:17:53 Ready to write response ...
	2024/01/08 20:17:53 Ready to marshal response ...
	2024/01/08 20:17:53 Ready to write response ...
	2024/01/08 20:18:02 Ready to marshal response ...
	2024/01/08 20:18:02 Ready to write response ...
	2024/01/08 20:18:08 Ready to marshal response ...
	2024/01/08 20:18:08 Ready to write response ...
	2024/01/08 20:18:16 Ready to marshal response ...
	2024/01/08 20:18:16 Ready to write response ...
	2024/01/08 20:18:38 Ready to marshal response ...
	2024/01/08 20:18:38 Ready to write response ...
	
	
	==> kernel <==
	 20:18:48 up 6 min,  0 users,  load average: 2.43, 2.31, 1.08
	Linux addons-084500 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [5867dca9f4e4] <==
	I0108 20:15:21.749696       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.99.114.83"}
	W0108 20:15:23.176887       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 20:15:24.601593       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.102.9.214"}
	I0108 20:15:30.440779       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0108 20:16:16.941470       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 20:16:16.941732       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 20:16:16.941905       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 20:16:16.942721       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 20:16:16.942836       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 20:16:16.942947       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 20:16:30.440563       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0108 20:16:46.811398       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.159.196:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.159.196:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.105.159.196:443: connect: connection refused
	W0108 20:16:46.811942       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 20:16:46.812120       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 20:16:46.814259       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0108 20:16:46.819585       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.159.196:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.159.196:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.105.159.196:443: connect: connection refused
	E0108 20:16:46.820219       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.159.196:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.159.196:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.105.159.196:443: connect: connection refused
	E0108 20:16:46.830342       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.159.196:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.159.196:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.105.159.196:443: connect: connection refused
	I0108 20:16:46.941613       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0108 20:17:30.446430       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0108 20:18:30.641740       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0108 20:18:47.829994       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [53ed11f8118b] <==
	I0108 20:17:08.206146       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0108 20:17:08.206954       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0108 20:17:21.335983       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="7.716882ms"
	I0108 20:17:21.336763       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="534.906µs"
	I0108 20:17:38.027112       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0108 20:17:38.039243       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0108 20:17:38.097824       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0108 20:17:38.101627       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0108 20:17:49.511789       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="75.005µs"
	I0108 20:17:51.601303       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="18.680318ms"
	I0108 20:17:51.601406       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="46.103µs"
	I0108 20:17:52.697423       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0108 20:17:52.737341       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0108 20:17:52.738235       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0108 20:17:53.081211       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0108 20:18:01.281482       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0108 20:18:03.629243       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="19.664618ms"
	I0108 20:18:03.629650       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="79.104µs"
	I0108 20:18:07.124014       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0108 20:18:13.589233       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="10.301µs"
	I0108 20:18:25.962259       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="8.201µs"
	I0108 20:18:33.759319       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0108 20:18:34.351801       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="6.2µs"
	I0108 20:18:34.560906       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-64c8c85f65" duration="5.4µs"
	I0108 20:18:37.161887       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	
	
	==> kube-proxy [d033da4b06b5] <==
	I0108 20:14:53.623776       1 server_others.go:69] "Using iptables proxy"
	I0108 20:14:53.793239       1 node.go:141] Successfully retrieved node IP: 172.29.100.38
	I0108 20:14:54.075130       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 20:14:54.075759       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 20:14:54.139405       1 server_others.go:152] "Using iptables Proxier"
	I0108 20:14:54.139860       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 20:14:54.140931       1 server.go:846] "Version info" version="v1.28.4"
	I0108 20:14:54.141052       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 20:14:54.156092       1 config.go:188] "Starting service config controller"
	I0108 20:14:54.156628       1 config.go:97] "Starting endpoint slice config controller"
	I0108 20:14:54.234469       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 20:14:54.187569       1 config.go:315] "Starting node config controller"
	I0108 20:14:54.234588       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 20:14:54.234599       1 shared_informer.go:318] Caches are synced for node config
	I0108 20:14:54.234383       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 20:14:54.234617       1 shared_informer.go:318] Caches are synced for service config
	I0108 20:14:54.234702       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [bcfb1020bc36] <==
	W0108 20:14:30.593556       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 20:14:30.593650       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 20:14:31.474216       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 20:14:31.474292       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 20:14:31.492689       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 20:14:31.492783       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 20:14:31.527727       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 20:14:31.527926       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 20:14:31.530862       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 20:14:31.531087       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 20:14:31.540498       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 20:14:31.540558       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 20:14:31.612437       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 20:14:31.612466       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 20:14:31.668886       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 20:14:31.668940       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 20:14:31.797943       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 20:14:31.798256       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 20:14:32.006719       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 20:14:32.006765       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 20:14:32.008958       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 20:14:32.008980       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 20:14:32.024633       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 20:14:32.024655       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0108 20:14:34.751976       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 20:12:39 UTC, ends at Mon 2024-01-08 20:18:48 UTC. --
	Jan 08 20:18:38 addons-084500 kubelet[2672]: I0108 20:18:38.447401    2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq65f\" (UniqueName: \"kubernetes.io/projected/40438817-77a5-46b6-83b7-d542628949ae-kube-api-access-lq65f\") pod \"task-pv-pod-restore\" (UID: \"40438817-77a5-46b6-83b7-d542628949ae\") " pod="default/task-pv-pod-restore"
	Jan 08 20:18:38 addons-084500 kubelet[2672]: I0108 20:18:38.447436    2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/40438817-77a5-46b6-83b7-d542628949ae-gcp-creds\") pod \"task-pv-pod-restore\" (UID: \"40438817-77a5-46b6-83b7-d542628949ae\") " pod="default/task-pv-pod-restore"
	Jan 08 20:18:38 addons-084500 kubelet[2672]: I0108 20:18:38.568118    2672 operation_generator.go:665] "MountVolume.MountDevice succeeded for volume \"pvc-28eee6bc-f779-4b59-96b7-0fe40cc4c0ca\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^1aee01bf-ae63-11ee-97e2-26659441a61e\") pod \"task-pv-pod-restore\" (UID: \"40438817-77a5-46b6-83b7-d542628949ae\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/ba65c8f53fa4d40f435cd09a470f8953e63d4d07c4ebad9c966237786366cc1b/globalmount\"" pod="default/task-pv-pod-restore"
	Jan 08 20:18:39 addons-084500 kubelet[2672]: I0108 20:18:39.563406    2672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef942408d323ee0f8837267d7139c2d61b515864662fb3a1f94cf3b2c7af5726"
	Jan 08 20:18:41 addons-084500 kubelet[2672]: I0108 20:18:41.587488    2672 scope.go:117] "RemoveContainer" containerID="af9a22a8021088efbfd05d4cab4015facb7262ff37cf33e074660f7bd664d6b7"
	Jan 08 20:18:42 addons-084500 kubelet[2672]: I0108 20:18:42.708545    2672 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=4.169426694 podCreationTimestamp="2024-01-08 20:18:38 +0000 UTC" firstStartedPulling="2024-01-08 20:18:39.771768812 +0000 UTC m=+245.532309712" lastFinishedPulling="2024-01-08 20:18:40.310828266 +0000 UTC m=+246.071369166" observedRunningTime="2024-01-08 20:18:40.645441246 +0000 UTC m=+246.405982246" watchObservedRunningTime="2024-01-08 20:18:42.708486148 +0000 UTC m=+248.469027148"
	Jan 08 20:18:43 addons-084500 kubelet[2672]: I0108 20:18:43.722488    2672 scope.go:117] "RemoveContainer" containerID="af9a22a8021088efbfd05d4cab4015facb7262ff37cf33e074660f7bd664d6b7"
	Jan 08 20:18:43 addons-084500 kubelet[2672]: I0108 20:18:43.723211    2672 scope.go:117] "RemoveContainer" containerID="d95b8cbfd8548c34e0185c2596ab3bd1f49db0e6d49aaca43a374369787a2d60"
	Jan 08 20:18:43 addons-084500 kubelet[2672]: E0108 20:18:43.723738    2672 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-vfh65_gadget(161400a4-6cfc-47fc-a4f9-2c7ed8aa8592)\"" pod="gadget/gadget-vfh65" podUID="161400a4-6cfc-47fc-a4f9-2c7ed8aa8592"
	Jan 08 20:18:44 addons-084500 kubelet[2672]: E0108 20:18:44.691881    2672 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="d95b8cbfd8548c34e0185c2596ab3bd1f49db0e6d49aaca43a374369787a2d60" cmd=["/bin/gadgettracermanager","-liveness"]
	Jan 08 20:18:44 addons-084500 kubelet[2672]: I0108 20:18:44.767794    2672 scope.go:117] "RemoveContainer" containerID="d95b8cbfd8548c34e0185c2596ab3bd1f49db0e6d49aaca43a374369787a2d60"
	Jan 08 20:18:44 addons-084500 kubelet[2672]: E0108 20:18:44.768287    2672 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-vfh65_gadget(161400a4-6cfc-47fc-a4f9-2c7ed8aa8592)\"" pod="gadget/gadget-vfh65" podUID="161400a4-6cfc-47fc-a4f9-2c7ed8aa8592"
	Jan 08 20:18:45 addons-084500 kubelet[2672]: I0108 20:18:45.801298    2672 scope.go:117] "RemoveContainer" containerID="d95b8cbfd8548c34e0185c2596ab3bd1f49db0e6d49aaca43a374369787a2d60"
	Jan 08 20:18:45 addons-084500 kubelet[2672]: E0108 20:18:45.801799    2672 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-vfh65_gadget(161400a4-6cfc-47fc-a4f9-2c7ed8aa8592)\"" pod="gadget/gadget-vfh65" podUID="161400a4-6cfc-47fc-a4f9-2c7ed8aa8592"
	Jan 08 20:18:48 addons-084500 kubelet[2672]: I0108 20:18:48.051653    2672 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/40438817-77a5-46b6-83b7-d542628949ae-gcp-creds\") pod \"40438817-77a5-46b6-83b7-d542628949ae\" (UID: \"40438817-77a5-46b6-83b7-d542628949ae\") "
	Jan 08 20:18:48 addons-084500 kubelet[2672]: I0108 20:18:48.051788    2672 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^1aee01bf-ae63-11ee-97e2-26659441a61e\") pod \"40438817-77a5-46b6-83b7-d542628949ae\" (UID: \"40438817-77a5-46b6-83b7-d542628949ae\") "
	Jan 08 20:18:48 addons-084500 kubelet[2672]: I0108 20:18:48.052313    2672 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq65f\" (UniqueName: \"kubernetes.io/projected/40438817-77a5-46b6-83b7-d542628949ae-kube-api-access-lq65f\") pod \"40438817-77a5-46b6-83b7-d542628949ae\" (UID: \"40438817-77a5-46b6-83b7-d542628949ae\") "
	Jan 08 20:18:48 addons-084500 kubelet[2672]: I0108 20:18:48.052758    2672 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40438817-77a5-46b6-83b7-d542628949ae-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "40438817-77a5-46b6-83b7-d542628949ae" (UID: "40438817-77a5-46b6-83b7-d542628949ae"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jan 08 20:18:48 addons-084500 kubelet[2672]: I0108 20:18:48.056701    2672 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40438817-77a5-46b6-83b7-d542628949ae-kube-api-access-lq65f" (OuterVolumeSpecName: "kube-api-access-lq65f") pod "40438817-77a5-46b6-83b7-d542628949ae" (UID: "40438817-77a5-46b6-83b7-d542628949ae"). InnerVolumeSpecName "kube-api-access-lq65f". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 08 20:18:48 addons-084500 kubelet[2672]: I0108 20:18:48.071808    2672 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^1aee01bf-ae63-11ee-97e2-26659441a61e" (OuterVolumeSpecName: "task-pv-storage") pod "40438817-77a5-46b6-83b7-d542628949ae" (UID: "40438817-77a5-46b6-83b7-d542628949ae"). InnerVolumeSpecName "pvc-28eee6bc-f779-4b59-96b7-0fe40cc4c0ca". PluginName "kubernetes.io/csi", VolumeGidValue ""
	Jan 08 20:18:48 addons-084500 kubelet[2672]: I0108 20:18:48.154278    2672 reconciler_common.go:293] "operationExecutor.UnmountDevice started for volume \"pvc-28eee6bc-f779-4b59-96b7-0fe40cc4c0ca\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^1aee01bf-ae63-11ee-97e2-26659441a61e\") on node \"addons-084500\" "
	Jan 08 20:18:48 addons-084500 kubelet[2672]: I0108 20:18:48.154514    2672 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lq65f\" (UniqueName: \"kubernetes.io/projected/40438817-77a5-46b6-83b7-d542628949ae-kube-api-access-lq65f\") on node \"addons-084500\" DevicePath \"\""
	Jan 08 20:18:48 addons-084500 kubelet[2672]: I0108 20:18:48.154559    2672 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/40438817-77a5-46b6-83b7-d542628949ae-gcp-creds\") on node \"addons-084500\" DevicePath \"\""
	Jan 08 20:18:48 addons-084500 kubelet[2672]: I0108 20:18:48.162342    2672 operation_generator.go:996] UnmountDevice succeeded for volume "pvc-28eee6bc-f779-4b59-96b7-0fe40cc4c0ca" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^1aee01bf-ae63-11ee-97e2-26659441a61e") on node "addons-084500"
	Jan 08 20:18:48 addons-084500 kubelet[2672]: I0108 20:18:48.255551    2672 reconciler_common.go:300] "Volume detached for volume \"pvc-28eee6bc-f779-4b59-96b7-0fe40cc4c0ca\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^1aee01bf-ae63-11ee-97e2-26659441a61e\") on node \"addons-084500\" DevicePath \"\""
	
	
	==> storage-provisioner [b2fcdbf2b12d] <==
	I0108 20:15:26.204253       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 20:15:26.222942       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 20:15:26.223028       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 20:15:26.234540       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 20:15:26.235462       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-084500_77c5e852-31ed-4a91-92c9-e4886aad45f6!
	I0108 20:15:26.238234       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9c581f63-e5f7-4a6e-926d-c4cf6a524366", APIVersion:"v1", ResourceVersion:"879", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-084500_77c5e852-31ed-4a91-92c9-e4886aad45f6 became leader
	I0108 20:15:26.337278       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-084500_77c5e852-31ed-4a91-92c9-e4886aad45f6!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 20:18:39.665054   10160 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
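The warning captured in the stderr block above is the Docker CLI failing to load metadata for its current context "default" from the .docker\contexts\meta directory on the Jenkins agent; the same message shows up again in the TestCertExpiration output further down. A minimal sketch, assuming the docker CLI is on the agent's PATH, of how that context state could be inspected or reset. Only the context name "default" is taken from the log; the commands are standard docker subcommands and are not part of the test itself:

	# list the contexts the CLI knows about and mark the currently selected one
	docker context ls
	# show how the "default" context resolves (endpoint and metadata source)
	docker context inspect default
	# point the CLI back at the built-in default context if a stale name is configured
	docker context use default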
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-084500 -n addons-084500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-084500 -n addons-084500: (12.5528986s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-084500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-f26wj ingress-nginx-admission-patch-2vml7
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-084500 describe pod ingress-nginx-admission-create-f26wj ingress-nginx-admission-patch-2vml7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-084500 describe pod ingress-nginx-admission-create-f26wj ingress-nginx-admission-patch-2vml7: exit status 1 (192.6696ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-f26wj" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2vml7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-084500 describe pod ingress-nginx-admission-create-f26wj ingress-nginx-admission-patch-2vml7: exit status 1
--- FAIL: TestAddons/parallel/Registry (70.35s)
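The exit status 1 from the final describe step is consistent with the two ingress-nginx-admission pods, listed as non-running a few commands earlier, having already been removed (for example by cleanup of their one-shot admission Jobs) in the interval before the describe ran; that reading is an inference from this log, not something the test asserts. A short, hypothetical way to double-check against the same cluster, with the profile name taken from the log and only stock kubectl flags:

	# list the admission Jobs (if any remain) and the pods they produced
	kubectl --context addons-084500 get jobs,pods -n ingress-nginx
	# recent namespace events show when the completed pods were removed
	kubectl --context addons-084500 get events -n ingress-nginx --sort-by=.lastTimestamp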

                                                
                                    
x
+
TestCertExpiration (1202.38s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-785700 --memory=2048 --cert-expiration=3m --driver=hyperv
E0108 22:15:48.019959    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-785700 --memory=2048 --cert-expiration=3m --driver=hyperv: (7m8.9650138s)
E0108 22:22:52.252928    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 22:22:58.315562    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
E0108 22:23:51.270704    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-785700 --memory=2048 --cert-expiration=8760h --driver=hyperv
E0108 22:25:48.029580    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cert-expiration-785700 --memory=2048 --cert-expiration=8760h --driver=hyperv: exit status 90 (5m31.3758567s)

                                                
                                                
-- stdout --
	* [cert-expiration-785700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting control plane node cert-expiration-785700 in cluster cert-expiration-785700
	* Updating the running hyperv "cert-expiration-785700" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 22:25:45.598231    6868 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! This VM is having trouble accessing https://registry.k8s.io
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2024-01-08 22:20:34 UTC, ends at Mon 2024-01-08 22:31:16 UTC. --
	Jan 08 22:21:28 cert-expiration-785700 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.193198552Z" level=info msg="Starting up"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.194123684Z" level=info msg="containerd not running, starting managed containerd"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.195271123Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.233412041Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.257365668Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.258201797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.260381872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.260508677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.260800287Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.260950792Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.261049596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.261192900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.261285404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.261395907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.261872024Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.262002128Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.262020129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.262199935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.262290738Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.262437343Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.262461444Z" level=info msg="metadata content store policy set" policy=shared
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.272974407Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273082111Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273104012Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273138013Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273154614Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273165614Z" level=info msg="NRI interface is disabled by configuration."
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273177914Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273607429Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273717833Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273739634Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273754834Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273769235Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273786235Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273799636Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273812736Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273828837Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273842737Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273855238Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273868538Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274022844Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274282553Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274338055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274355855Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274379356Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274527561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274555362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274570563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274583363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274596463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274610264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274623064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274635665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274649265Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274704967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274830372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274850672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274867173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274881373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274896374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274908574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274919775Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274934175Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274947276Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274958776Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.275199284Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.275337889Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.275388891Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.275412992Z" level=info msg="containerd successfully booted in 0.044836s"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.318338074Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.334201222Z" level=info msg="Loading containers: start."
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.551807639Z" level=info msg="Loading containers: done."
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.568860428Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.568969332Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.569015134Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.569053435Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.569102537Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.569340445Z" level=info msg="Daemon has completed initialization"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.623614920Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.623715923Z" level=info msg="API listen on [::]:2376"
	Jan 08 22:21:28 cert-expiration-785700 systemd[1]: Started Docker Application Container Engine.
	Jan 08 22:21:59 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:59.212133234Z" level=info msg="Processing signal 'terminated'"
	Jan 08 22:21:59 cert-expiration-785700 systemd[1]: Stopping Docker Application Container Engine...
	Jan 08 22:21:59 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:59.214439634Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 08 22:21:59 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:59.214443934Z" level=info msg="Daemon shutdown complete"
	Jan 08 22:21:59 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:59.215317534Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 08 22:21:59 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:59.215638834Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 08 22:22:00 cert-expiration-785700 systemd[1]: docker.service: Succeeded.
	Jan 08 22:22:00 cert-expiration-785700 systemd[1]: Stopped Docker Application Container Engine.
	Jan 08 22:22:00 cert-expiration-785700 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.293546134Z" level=info msg="Starting up"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.294646234Z" level=info msg="containerd not running, starting managed containerd"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.296068734Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1016
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.335709634Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.365911834Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.366051134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.368846634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.368961934Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369206634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369349734Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369384534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369411134Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369424434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369655334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369850534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369953034Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369973334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370351534Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370398834Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370423734Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370438134Z" level=info msg="metadata content store policy set" policy=shared
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370542734Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370566134Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370580334Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370662934Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370684634Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370698134Z" level=info msg="NRI interface is disabled by configuration."
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370712634Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370763034Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370801634Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370820034Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370835334Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370851034Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370868434Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370883334Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370898134Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370912934Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370928034Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370942934Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370957534Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370996834Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371358834Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371489234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371514634Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371539134Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371660334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371761734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371783134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371797234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371811734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371825534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371838834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371851934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371871734Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371906134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371922734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371939134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371953234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371967234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371982034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371995434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372008734Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372024034Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372036534Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372047934Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372388834Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372590934Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372694934Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372790434Z" level=info msg="containerd successfully booted in 0.039586s"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.398779034Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.471986334Z" level=info msg="Loading containers: start."
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.641571534Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.709143734Z" level=info msg="Loading containers: done."
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.729675034Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.729696734Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.729704234Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.729710534Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.729731034Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.729776034Z" level=info msg="Daemon has completed initialization"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.782866134Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.783014634Z" level=info msg="API listen on [::]:2376"
	Jan 08 22:22:00 cert-expiration-785700 systemd[1]: Started Docker Application Container Engine.
	Jan 08 22:22:14 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:14.674247734Z" level=info msg="Processing signal 'terminated'"
	Jan 08 22:22:14 cert-expiration-785700 systemd[1]: Stopping Docker Application Container Engine...
	Jan 08 22:22:14 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:14.676458034Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 08 22:22:14 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:14.676486534Z" level=info msg="Daemon shutdown complete"
	Jan 08 22:22:14 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:14.677006334Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 08 22:22:14 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:14.677256634Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 08 22:22:15 cert-expiration-785700 systemd[1]: docker.service: Succeeded.
	Jan 08 22:22:15 cert-expiration-785700 systemd[1]: Stopped Docker Application Container Engine.
	Jan 08 22:22:15 cert-expiration-785700 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:15.753378834Z" level=info msg="Starting up"
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:15.755776234Z" level=info msg="containerd not running, starting managed containerd"
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:15.757719634Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1321
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.792922934Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.819834134Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.819938334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.822840434Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.822947134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823208734Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823305834Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823337734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823362734Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823376034Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823400234Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823737834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823855334Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823907834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824128834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824221234Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824248434Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824260534Z" level=info msg="metadata content store policy set" policy=shared
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824387934Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824488234Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824508534Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824555634Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824571334Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824582734Z" level=info msg="NRI interface is disabled by configuration."
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824640934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824694634Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824786734Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824807834Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824827434Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824843834Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824860934Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824875334Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824889534Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824906334Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824920934Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824951334Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824963034Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825075234Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825347734Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825456434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825477534Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825501434Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825553034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825691034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825710434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825726234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825739834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825754434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825767034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825778734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825792334Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825824534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825840834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825853034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825865534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825877934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825895934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825909534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825922634Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825936634Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825948334Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825958734Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.826270234Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.826403834Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.826526734Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.826643734Z" level=info msg="containerd successfully booted in 0.034679s"
	Jan 08 22:22:16 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:16.428774034Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.023068134Z" level=info msg="Loading containers: start."
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.190851634Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.257313934Z" level=info msg="Loading containers: done."
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.281708734Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.281816934Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.281831234Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.281838834Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.281861634Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.281910334Z" level=info msg="Daemon has completed initialization"
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.323263034Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 08 22:22:17 cert-expiration-785700 systemd[1]: Started Docker Application Container Engine.
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.325796734Z" level=info msg="API listen on [::]:2376"
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.265577518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.266731262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.267137378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.267381287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.378441754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.378589559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.378687963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.378705464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.380351427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.380562735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.381584574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.381821984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.428892592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.429202304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.429303608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.429413112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.129440897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.129851712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.130056619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.130244926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.383784657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.384505283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.384727191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.384874597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.805398642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.805666251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.805771255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.805976862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:27 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:27.056908975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:27 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:27.057353090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:27 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:27.060792706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:27 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:27.060970012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.730904747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.730971447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.730990147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.731004748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.756114496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.756200797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.756232097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.756249598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.006898177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.007206580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.007322181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.007540483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.612067897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.612473001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.613565511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.614042016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.028000444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.028305647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.028379748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.028418448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.816560710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.817148015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.817282716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.817302817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:30:04 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:04.816052792Z" level=info msg="Processing signal 'terminated'"
	Jan 08 22:30:04 cert-expiration-785700 systemd[1]: Stopping Docker Application Container Engine...
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.131852499Z" level=info msg="ignoring event" container=f35e0da4b4ec6dae9d7a25ee062170b857f23499c440736930afe77ea62aeccf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.132121600Z" level=info msg="shim disconnected" id=f35e0da4b4ec6dae9d7a25ee062170b857f23499c440736930afe77ea62aeccf namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.132283700Z" level=warning msg="cleaning up after shim disconnected" id=f35e0da4b4ec6dae9d7a25ee062170b857f23499c440736930afe77ea62aeccf namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.132303300Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.134417706Z" level=info msg="ignoring event" container=1e859794dd09a4a134ba89fe24c8fd683d82d3d5f205928e1b576a6c18430767 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.136326612Z" level=info msg="shim disconnected" id=1e859794dd09a4a134ba89fe24c8fd683d82d3d5f205928e1b576a6c18430767 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.136368112Z" level=warning msg="cleaning up after shim disconnected" id=1e859794dd09a4a134ba89fe24c8fd683d82d3d5f205928e1b576a6c18430767 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.136379512Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.137027914Z" level=info msg="shim disconnected" id=a1a83e8a6210894b8a556485b2a6df18f9247bcb42f7022c74bbf67e34c873c0 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.137072314Z" level=warning msg="cleaning up after shim disconnected" id=a1a83e8a6210894b8a556485b2a6df18f9247bcb42f7022c74bbf67e34c873c0 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.137083714Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.137966517Z" level=info msg="ignoring event" container=a1a83e8a6210894b8a556485b2a6df18f9247bcb42f7022c74bbf67e34c873c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.154409464Z" level=info msg="ignoring event" container=de386a0a973bab42bde9d8877c103167f5a0479a58cfcfdc2e337d0959040ca6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.154946565Z" level=info msg="shim disconnected" id=de386a0a973bab42bde9d8877c103167f5a0479a58cfcfdc2e337d0959040ca6 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.154992466Z" level=warning msg="cleaning up after shim disconnected" id=de386a0a973bab42bde9d8877c103167f5a0479a58cfcfdc2e337d0959040ca6 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.155074066Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.165106095Z" level=info msg="ignoring event" container=953b566f951bb9432c01fbfd656e52890ebdac05df59b6abf4065532eaefa43c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.174232321Z" level=info msg="shim disconnected" id=953b566f951bb9432c01fbfd656e52890ebdac05df59b6abf4065532eaefa43c namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.174334921Z" level=warning msg="cleaning up after shim disconnected" id=953b566f951bb9432c01fbfd656e52890ebdac05df59b6abf4065532eaefa43c namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.174349321Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.180943240Z" level=info msg="ignoring event" container=504ed1e085f548241814297002225defded88b3afe543f08d6cd32c4ed42477b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.181339641Z" level=info msg="shim disconnected" id=504ed1e085f548241814297002225defded88b3afe543f08d6cd32c4ed42477b namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.181494842Z" level=warning msg="cleaning up after shim disconnected" id=504ed1e085f548241814297002225defded88b3afe543f08d6cd32c4ed42477b namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.181513642Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.212464231Z" level=info msg="ignoring event" container=d475444ed114cbd25bf76dd2e12e846d9e489d9fbb36403c4b19c8ac7c310f5f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.212714131Z" level=info msg="ignoring event" container=c67eb3764e72f785d32a4823e8d4ab77b9eb868253cd92265295e7aa1498452d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.212828232Z" level=info msg="ignoring event" container=fa758777e424baa614120a5c1282754590ada8a1fb8d922249f6069de611c942 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.212848032Z" level=info msg="ignoring event" container=0bb9aa34fef2619bee5818a4514e3f2576426df2090a293dda8b10bbefaea427 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.212865532Z" level=info msg="ignoring event" container=22d7cda2998201806029779579d6063512f3da5f49f1ade7ec46aa613cf020d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213386633Z" level=info msg="shim disconnected" id=22d7cda2998201806029779579d6063512f3da5f49f1ade7ec46aa613cf020d1 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213467533Z" level=warning msg="cleaning up after shim disconnected" id=22d7cda2998201806029779579d6063512f3da5f49f1ade7ec46aa613cf020d1 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213482533Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213654034Z" level=info msg="shim disconnected" id=fa758777e424baa614120a5c1282754590ada8a1fb8d922249f6069de611c942 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213722434Z" level=warning msg="cleaning up after shim disconnected" id=fa758777e424baa614120a5c1282754590ada8a1fb8d922249f6069de611c942 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213736034Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213848435Z" level=info msg="shim disconnected" id=0bb9aa34fef2619bee5818a4514e3f2576426df2090a293dda8b10bbefaea427 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213877135Z" level=warning msg="cleaning up after shim disconnected" id=0bb9aa34fef2619bee5818a4514e3f2576426df2090a293dda8b10bbefaea427 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213901935Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.214106735Z" level=info msg="shim disconnected" id=d475444ed114cbd25bf76dd2e12e846d9e489d9fbb36403c4b19c8ac7c310f5f namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.214158535Z" level=warning msg="cleaning up after shim disconnected" id=d475444ed114cbd25bf76dd2e12e846d9e489d9fbb36403c4b19c8ac7c310f5f namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.214170735Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.216849443Z" level=info msg="shim disconnected" id=c67eb3764e72f785d32a4823e8d4ab77b9eb868253cd92265295e7aa1498452d namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.216893443Z" level=warning msg="cleaning up after shim disconnected" id=c67eb3764e72f785d32a4823e8d4ab77b9eb868253cd92265295e7aa1498452d namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.216937643Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.243373719Z" level=info msg="shim disconnected" id=03eb614a5c902b45ed9463f7255a237cfb92b53e9afecf9c9b5fed6ce550bed9 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.248682035Z" level=warning msg="cleaning up after shim disconnected" id=03eb614a5c902b45ed9463f7255a237cfb92b53e9afecf9c9b5fed6ce550bed9 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.248702035Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.249079536Z" level=info msg="ignoring event" container=03eb614a5c902b45ed9463f7255a237cfb92b53e9afecf9c9b5fed6ce550bed9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:10 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:10.050250521Z" level=info msg="shim disconnected" id=742d6685f01a6bde4f129a0977d8289d3602e5479ab8a7f31554a1b0eae10f41 namespace=moby
	Jan 08 22:30:10 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:10.051070323Z" level=warning msg="cleaning up after shim disconnected" id=742d6685f01a6bde4f129a0977d8289d3602e5479ab8a7f31554a1b0eae10f41 namespace=moby
	Jan 08 22:30:10 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:10.051160224Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:10 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:10.052309927Z" level=info msg="ignoring event" container=742d6685f01a6bde4f129a0977d8289d3602e5479ab8a7f31554a1b0eae10f41 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:15.068932970Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=bb7f8834279bfe095fb51af0415e582862d3cc4ccd1c9f701ec94c1d47e23339
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:15.126669689Z" level=info msg="ignoring event" container=bb7f8834279bfe095fb51af0415e582862d3cc4ccd1c9f701ec94c1d47e23339 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:15.128287503Z" level=info msg="shim disconnected" id=bb7f8834279bfe095fb51af0415e582862d3cc4ccd1c9f701ec94c1d47e23339 namespace=moby
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:15.128583006Z" level=warning msg="cleaning up after shim disconnected" id=bb7f8834279bfe095fb51af0415e582862d3cc4ccd1c9f701ec94c1d47e23339 namespace=moby
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:15.128603606Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:15.623195052Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:15.623769257Z" level=info msg="Daemon shutdown complete"
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:15.624021560Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:15.624311462Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 08 22:30:16 cert-expiration-785700 systemd[1]: docker.service: Succeeded.
	Jan 08 22:30:16 cert-expiration-785700 systemd[1]: Stopped Docker Application Container Engine.
	Jan 08 22:30:16 cert-expiration-785700 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 22:30:16 cert-expiration-785700 dockerd[8316]: time="2024-01-08T22:30:16.703936206Z" level=info msg="Starting up"
	Jan 08 22:31:16 cert-expiration-785700 dockerd[8316]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 08 22:31:16 cert-expiration-785700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 08 22:31:16 cert-expiration-785700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 08 22:31:16 cert-expiration-785700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-windows-amd64.exe start -p cert-expiration-785700 --memory=2048 --cert-expiration=8760h --driver=hyperv" : exit status 90
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-785700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting control plane node cert-expiration-785700 in cluster cert-expiration-785700
	* Updating the running hyperv "cert-expiration-785700" VM ...
	
	

-- /stdout --
** stderr ** 
	W0108 22:25:45.598231    6868 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! This VM is having trouble accessing https://registry.k8s.io
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2024-01-08 22:20:34 UTC, ends at Mon 2024-01-08 22:31:16 UTC. --
	Jan 08 22:21:28 cert-expiration-785700 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.193198552Z" level=info msg="Starting up"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.194123684Z" level=info msg="containerd not running, starting managed containerd"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.195271123Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.233412041Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.257365668Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.258201797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.260381872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.260508677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.260800287Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.260950792Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.261049596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.261192900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.261285404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.261395907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.261872024Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.262002128Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.262020129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.262199935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.262290738Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.262437343Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.262461444Z" level=info msg="metadata content store policy set" policy=shared
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.272974407Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273082111Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273104012Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273138013Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273154614Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273165614Z" level=info msg="NRI interface is disabled by configuration."
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273177914Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273607429Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273717833Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273739634Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273754834Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273769235Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273786235Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273799636Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273812736Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273828837Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273842737Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273855238Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273868538Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274022844Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274282553Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274338055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274355855Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274379356Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274527561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274555362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274570563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274583363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274596463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274610264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274623064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274635665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274649265Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274704967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274830372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274850672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274867173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274881373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274896374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274908574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274919775Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274934175Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274947276Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274958776Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.275199284Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.275337889Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.275388891Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.275412992Z" level=info msg="containerd successfully booted in 0.044836s"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.318338074Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.334201222Z" level=info msg="Loading containers: start."
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.551807639Z" level=info msg="Loading containers: done."
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.568860428Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.568969332Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.569015134Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.569053435Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.569102537Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.569340445Z" level=info msg="Daemon has completed initialization"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.623614920Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.623715923Z" level=info msg="API listen on [::]:2376"
	Jan 08 22:21:28 cert-expiration-785700 systemd[1]: Started Docker Application Container Engine.
	Jan 08 22:21:59 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:59.212133234Z" level=info msg="Processing signal 'terminated'"
	Jan 08 22:21:59 cert-expiration-785700 systemd[1]: Stopping Docker Application Container Engine...
	Jan 08 22:21:59 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:59.214439634Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 08 22:21:59 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:59.214443934Z" level=info msg="Daemon shutdown complete"
	Jan 08 22:21:59 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:59.215317534Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 08 22:21:59 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:59.215638834Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 08 22:22:00 cert-expiration-785700 systemd[1]: docker.service: Succeeded.
	Jan 08 22:22:00 cert-expiration-785700 systemd[1]: Stopped Docker Application Container Engine.
	Jan 08 22:22:00 cert-expiration-785700 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.293546134Z" level=info msg="Starting up"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.294646234Z" level=info msg="containerd not running, starting managed containerd"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.296068734Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1016
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.335709634Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.365911834Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.366051134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.368846634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.368961934Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369206634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369349734Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369384534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369411134Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369424434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369655334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369850534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369953034Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369973334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370351534Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370398834Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370423734Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370438134Z" level=info msg="metadata content store policy set" policy=shared
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370542734Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370566134Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370580334Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370662934Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370684634Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370698134Z" level=info msg="NRI interface is disabled by configuration."
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370712634Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370763034Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370801634Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370820034Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370835334Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370851034Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370868434Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370883334Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370898134Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370912934Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370928034Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370942934Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370957534Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370996834Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371358834Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371489234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371514634Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371539134Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371660334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371761734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371783134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371797234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371811734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371825534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371838834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371851934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371871734Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371906134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371922734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371939134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371953234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371967234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371982034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371995434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372008734Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372024034Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372036534Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372047934Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372388834Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372590934Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372694934Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372790434Z" level=info msg="containerd successfully booted in 0.039586s"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.398779034Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.471986334Z" level=info msg="Loading containers: start."
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.641571534Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.709143734Z" level=info msg="Loading containers: done."
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.729675034Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.729696734Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.729704234Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.729710534Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.729731034Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.729776034Z" level=info msg="Daemon has completed initialization"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.782866134Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.783014634Z" level=info msg="API listen on [::]:2376"
	Jan 08 22:22:00 cert-expiration-785700 systemd[1]: Started Docker Application Container Engine.
	Jan 08 22:22:14 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:14.674247734Z" level=info msg="Processing signal 'terminated'"
	Jan 08 22:22:14 cert-expiration-785700 systemd[1]: Stopping Docker Application Container Engine...
	Jan 08 22:22:14 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:14.676458034Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 08 22:22:14 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:14.676486534Z" level=info msg="Daemon shutdown complete"
	Jan 08 22:22:14 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:14.677006334Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 08 22:22:14 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:14.677256634Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 08 22:22:15 cert-expiration-785700 systemd[1]: docker.service: Succeeded.
	Jan 08 22:22:15 cert-expiration-785700 systemd[1]: Stopped Docker Application Container Engine.
	Jan 08 22:22:15 cert-expiration-785700 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:15.753378834Z" level=info msg="Starting up"
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:15.755776234Z" level=info msg="containerd not running, starting managed containerd"
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:15.757719634Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1321
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.792922934Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.819834134Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.819938334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.822840434Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.822947134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823208734Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823305834Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823337734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823362734Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823376034Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823400234Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823737834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823855334Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823907834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824128834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824221234Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824248434Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824260534Z" level=info msg="metadata content store policy set" policy=shared
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824387934Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824488234Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824508534Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824555634Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824571334Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824582734Z" level=info msg="NRI interface is disabled by configuration."
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824640934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824694634Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824786734Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824807834Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824827434Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824843834Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824860934Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824875334Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824889534Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824906334Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824920934Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824951334Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824963034Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825075234Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825347734Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825456434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825477534Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825501434Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825553034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825691034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825710434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825726234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825739834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825754434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825767034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825778734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825792334Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825824534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825840834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825853034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825865534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825877934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825895934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825909534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825922634Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825936634Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825948334Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825958734Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.826270234Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.826403834Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.826526734Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.826643734Z" level=info msg="containerd successfully booted in 0.034679s"
	Jan 08 22:22:16 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:16.428774034Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.023068134Z" level=info msg="Loading containers: start."
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.190851634Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.257313934Z" level=info msg="Loading containers: done."
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.281708734Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.281816934Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.281831234Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.281838834Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.281861634Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.281910334Z" level=info msg="Daemon has completed initialization"
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.323263034Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 08 22:22:17 cert-expiration-785700 systemd[1]: Started Docker Application Container Engine.
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.325796734Z" level=info msg="API listen on [::]:2376"
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.265577518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.266731262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.267137378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.267381287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.378441754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.378589559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.378687963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.378705464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.380351427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.380562735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.381584574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.381821984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.428892592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.429202304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.429303608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.429413112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.129440897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.129851712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.130056619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.130244926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.383784657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.384505283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.384727191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.384874597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.805398642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.805666251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.805771255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.805976862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:27 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:27.056908975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:27 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:27.057353090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:27 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:27.060792706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:27 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:27.060970012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.730904747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.730971447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.730990147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.731004748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.756114496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.756200797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.756232097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.756249598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.006898177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.007206580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.007322181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.007540483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.612067897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.612473001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.613565511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.614042016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.028000444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.028305647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.028379748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.028418448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.816560710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.817148015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.817282716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.817302817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:30:04 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:04.816052792Z" level=info msg="Processing signal 'terminated'"
	Jan 08 22:30:04 cert-expiration-785700 systemd[1]: Stopping Docker Application Container Engine...
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.131852499Z" level=info msg="ignoring event" container=f35e0da4b4ec6dae9d7a25ee062170b857f23499c440736930afe77ea62aeccf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.132121600Z" level=info msg="shim disconnected" id=f35e0da4b4ec6dae9d7a25ee062170b857f23499c440736930afe77ea62aeccf namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.132283700Z" level=warning msg="cleaning up after shim disconnected" id=f35e0da4b4ec6dae9d7a25ee062170b857f23499c440736930afe77ea62aeccf namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.132303300Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.134417706Z" level=info msg="ignoring event" container=1e859794dd09a4a134ba89fe24c8fd683d82d3d5f205928e1b576a6c18430767 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.136326612Z" level=info msg="shim disconnected" id=1e859794dd09a4a134ba89fe24c8fd683d82d3d5f205928e1b576a6c18430767 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.136368112Z" level=warning msg="cleaning up after shim disconnected" id=1e859794dd09a4a134ba89fe24c8fd683d82d3d5f205928e1b576a6c18430767 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.136379512Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.137027914Z" level=info msg="shim disconnected" id=a1a83e8a6210894b8a556485b2a6df18f9247bcb42f7022c74bbf67e34c873c0 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.137072314Z" level=warning msg="cleaning up after shim disconnected" id=a1a83e8a6210894b8a556485b2a6df18f9247bcb42f7022c74bbf67e34c873c0 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.137083714Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.137966517Z" level=info msg="ignoring event" container=a1a83e8a6210894b8a556485b2a6df18f9247bcb42f7022c74bbf67e34c873c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.154409464Z" level=info msg="ignoring event" container=de386a0a973bab42bde9d8877c103167f5a0479a58cfcfdc2e337d0959040ca6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.154946565Z" level=info msg="shim disconnected" id=de386a0a973bab42bde9d8877c103167f5a0479a58cfcfdc2e337d0959040ca6 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.154992466Z" level=warning msg="cleaning up after shim disconnected" id=de386a0a973bab42bde9d8877c103167f5a0479a58cfcfdc2e337d0959040ca6 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.155074066Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.165106095Z" level=info msg="ignoring event" container=953b566f951bb9432c01fbfd656e52890ebdac05df59b6abf4065532eaefa43c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.174232321Z" level=info msg="shim disconnected" id=953b566f951bb9432c01fbfd656e52890ebdac05df59b6abf4065532eaefa43c namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.174334921Z" level=warning msg="cleaning up after shim disconnected" id=953b566f951bb9432c01fbfd656e52890ebdac05df59b6abf4065532eaefa43c namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.174349321Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.180943240Z" level=info msg="ignoring event" container=504ed1e085f548241814297002225defded88b3afe543f08d6cd32c4ed42477b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.181339641Z" level=info msg="shim disconnected" id=504ed1e085f548241814297002225defded88b3afe543f08d6cd32c4ed42477b namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.181494842Z" level=warning msg="cleaning up after shim disconnected" id=504ed1e085f548241814297002225defded88b3afe543f08d6cd32c4ed42477b namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.181513642Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.212464231Z" level=info msg="ignoring event" container=d475444ed114cbd25bf76dd2e12e846d9e489d9fbb36403c4b19c8ac7c310f5f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.212714131Z" level=info msg="ignoring event" container=c67eb3764e72f785d32a4823e8d4ab77b9eb868253cd92265295e7aa1498452d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.212828232Z" level=info msg="ignoring event" container=fa758777e424baa614120a5c1282754590ada8a1fb8d922249f6069de611c942 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.212848032Z" level=info msg="ignoring event" container=0bb9aa34fef2619bee5818a4514e3f2576426df2090a293dda8b10bbefaea427 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.212865532Z" level=info msg="ignoring event" container=22d7cda2998201806029779579d6063512f3da5f49f1ade7ec46aa613cf020d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213386633Z" level=info msg="shim disconnected" id=22d7cda2998201806029779579d6063512f3da5f49f1ade7ec46aa613cf020d1 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213467533Z" level=warning msg="cleaning up after shim disconnected" id=22d7cda2998201806029779579d6063512f3da5f49f1ade7ec46aa613cf020d1 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213482533Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213654034Z" level=info msg="shim disconnected" id=fa758777e424baa614120a5c1282754590ada8a1fb8d922249f6069de611c942 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213722434Z" level=warning msg="cleaning up after shim disconnected" id=fa758777e424baa614120a5c1282754590ada8a1fb8d922249f6069de611c942 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213736034Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213848435Z" level=info msg="shim disconnected" id=0bb9aa34fef2619bee5818a4514e3f2576426df2090a293dda8b10bbefaea427 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213877135Z" level=warning msg="cleaning up after shim disconnected" id=0bb9aa34fef2619bee5818a4514e3f2576426df2090a293dda8b10bbefaea427 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213901935Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.214106735Z" level=info msg="shim disconnected" id=d475444ed114cbd25bf76dd2e12e846d9e489d9fbb36403c4b19c8ac7c310f5f namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.214158535Z" level=warning msg="cleaning up after shim disconnected" id=d475444ed114cbd25bf76dd2e12e846d9e489d9fbb36403c4b19c8ac7c310f5f namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.214170735Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.216849443Z" level=info msg="shim disconnected" id=c67eb3764e72f785d32a4823e8d4ab77b9eb868253cd92265295e7aa1498452d namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.216893443Z" level=warning msg="cleaning up after shim disconnected" id=c67eb3764e72f785d32a4823e8d4ab77b9eb868253cd92265295e7aa1498452d namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.216937643Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.243373719Z" level=info msg="shim disconnected" id=03eb614a5c902b45ed9463f7255a237cfb92b53e9afecf9c9b5fed6ce550bed9 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.248682035Z" level=warning msg="cleaning up after shim disconnected" id=03eb614a5c902b45ed9463f7255a237cfb92b53e9afecf9c9b5fed6ce550bed9 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.248702035Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.249079536Z" level=info msg="ignoring event" container=03eb614a5c902b45ed9463f7255a237cfb92b53e9afecf9c9b5fed6ce550bed9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:10 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:10.050250521Z" level=info msg="shim disconnected" id=742d6685f01a6bde4f129a0977d8289d3602e5479ab8a7f31554a1b0eae10f41 namespace=moby
	Jan 08 22:30:10 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:10.051070323Z" level=warning msg="cleaning up after shim disconnected" id=742d6685f01a6bde4f129a0977d8289d3602e5479ab8a7f31554a1b0eae10f41 namespace=moby
	Jan 08 22:30:10 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:10.051160224Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:10 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:10.052309927Z" level=info msg="ignoring event" container=742d6685f01a6bde4f129a0977d8289d3602e5479ab8a7f31554a1b0eae10f41 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:15.068932970Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=bb7f8834279bfe095fb51af0415e582862d3cc4ccd1c9f701ec94c1d47e23339
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:15.126669689Z" level=info msg="ignoring event" container=bb7f8834279bfe095fb51af0415e582862d3cc4ccd1c9f701ec94c1d47e23339 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:15.128287503Z" level=info msg="shim disconnected" id=bb7f8834279bfe095fb51af0415e582862d3cc4ccd1c9f701ec94c1d47e23339 namespace=moby
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:15.128583006Z" level=warning msg="cleaning up after shim disconnected" id=bb7f8834279bfe095fb51af0415e582862d3cc4ccd1c9f701ec94c1d47e23339 namespace=moby
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:15.128603606Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:15.623195052Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:15.623769257Z" level=info msg="Daemon shutdown complete"
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:15.624021560Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:15.624311462Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 08 22:30:16 cert-expiration-785700 systemd[1]: docker.service: Succeeded.
	Jan 08 22:30:16 cert-expiration-785700 systemd[1]: Stopped Docker Application Container Engine.
	Jan 08 22:30:16 cert-expiration-785700 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 22:30:16 cert-expiration-785700 dockerd[8316]: time="2024-01-08T22:30:16.703936206Z" level=info msg="Starting up"
	Jan 08 22:31:16 cert-expiration-785700 dockerd[8316]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 08 22:31:16 cert-expiration-785700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 08 22:31:16 cert-expiration-785700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 08 22:31:16 cert-expiration-785700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-01-08 22:31:17.093201 +0000 UTC m=+8438.958486601
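Annotation: the failure above comes down to the Docker engine on the cert-expiration-785700 guest never coming back after the stop at 22:30. The journalctl excerpt shows docker.service being stopped, a new dockerd (pid 8316) starting up at 22:30:16, and that process giving up at 22:31:16 because it could not dial /run/containerd/containerd.sock ("context deadline exceeded" after roughly a minute), so systemd marks docker.service as failed and the second `start -p cert-expiration-785700 --cert-expiration=8760h` (see the Audit table below, no End Time) cannot proceed. This suggests containerd was not up, or not yet listening on that socket, when dockerd retried. A minimal diagnostic sketch, assuming the VM is still reachable over SSH and using only standard systemctl/journalctl commands; the exact unit state on the guest may differ:
	# sketch only: inspect why dockerd could not reach containerd on the guest
	out/minikube-windows-amd64.exe -p cert-expiration-785700 ssh "sudo systemctl status containerd docker --no-pager"
	out/minikube-windows-amd64.exe -p cert-expiration-785700 ssh "sudo journalctl -u containerd -n 100 --no-pager"
	out/minikube-windows-amd64.exe -p cert-expiration-785700 ssh "ls -l /run/containerd/containerd.sock"
	# if containerd is down, bringing it up first usually lets docker.service start again
	out/minikube-windows-amd64.exe -p cert-expiration-785700 ssh "sudo systemctl restart containerd && sudo systemctl restart docker"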
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-expiration-785700 -n cert-expiration-785700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-expiration-785700 -n cert-expiration-785700: exit status 2 (12.803887s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 22:31:17.265010    5640 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
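Annotation: the only stderr from the status check is the recurring Docker CLI warning at 22:31:17 about an unresolvable "default" context; the CLI on the Jenkins host is looking for context metadata under C:\Users\jenkins.minikube7\.docker\contexts\meta\...\meta.json and not finding it. This is host-side noise, unrelated to the guest-side failure. If it needs to be chased down, a possible check on the host (assuming the Docker CLI is installed there; these are standard `docker context` subcommands, not minikube ones) is to list the configured contexts and look at whether config.json pins a currentContext whose metadata directory has been removed:
	docker context ls
	docker context inspect default
	type %USERPROFILE%\.docker\config.json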
helpers_test.go:244: <<< TestCertExpiration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestCertExpiration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-expiration-785700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p cert-expiration-785700 logs -n 25: (2m47.8846795s)
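Annotation: collecting the 25-line post-mortem log took nearly three minutes here, which is plausible when the engine on the guest is down and runtime queries have to time out. For attaching logs to an issue, the advice box above already points at the file variant; for this profile that would be along the lines of (profile and binary path taken from this report, `--file` as quoted in the advice box):
	out/minikube-windows-amd64.exe -p cert-expiration-785700 logs --file=logs.txt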
helpers_test.go:252: TestCertExpiration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p offline-docker-152000              | offline-docker-152000     | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:10 UTC | 08 Jan 24 22:11 UTC |
	| start   | -p force-systemd-flag-852700          | force-systemd-flag-852700 | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:17 UTC |
	|         | --memory=2048 --force-systemd         |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| start   | -p running-upgrade-680100             | running-upgrade-680100    | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:11 UTC |                     |
	|         | --memory=2200                         |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| delete  | -p NoKubernetes-152000                | NoKubernetes-152000       | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:12 UTC | 08 Jan 24 22:12 UTC |
	| start   | -p docker-flags-715600                | docker-flags-715600       | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:12 UTC | 08 Jan 24 22:20 UTC |
	|         | --cache-images=false                  |                           |                   |         |                     |                     |
	|         | --memory=2048                         |                           |                   |         |                     |                     |
	|         | --install-addons=false                |                           |                   |         |                     |                     |
	|         | --wait=false                          |                           |                   |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |                   |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |                   |         |                     |                     |
	|         | --docker-opt=debug                    |                           |                   |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| ssh     | force-systemd-env-868600              | force-systemd-env-868600  | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:14 UTC | 08 Jan 24 22:15 UTC |
	|         | ssh docker info --format              |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-env-868600           | force-systemd-env-868600  | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:15 UTC | 08 Jan 24 22:15 UTC |
	| start   | -p cert-expiration-785700             | cert-expiration-785700    | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:15 UTC | 08 Jan 24 22:22 UTC |
	|         | --memory=2048                         |                           |                   |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| ssh     | force-systemd-flag-852700             | force-systemd-flag-852700 | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:17 UTC | 08 Jan 24 22:17 UTC |
	|         | ssh docker info --format              |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-flag-852700          | force-systemd-flag-852700 | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:17 UTC | 08 Jan 24 22:17 UTC |
	| start   | -p cert-options-283400                | cert-options-283400       | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:17 UTC | 08 Jan 24 22:24 UTC |
	|         | --memory=2048                         |                           |                   |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |                   |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |                   |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |                   |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |                   |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| delete  | -p running-upgrade-680100             | running-upgrade-680100    | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:17 UTC | 08 Jan 24 22:18 UTC |
	| start   | -p pause-810600 --memory=2048         | pause-810600              | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:18 UTC | 08 Jan 24 22:27 UTC |
	|         | --install-addons=false                |                           |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv            |                           |                   |         |                     |                     |
	| ssh     | docker-flags-715600 ssh               | docker-flags-715600       | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:20 UTC | 08 Jan 24 22:20 UTC |
	|         | sudo systemctl show docker            |                           |                   |         |                     |                     |
	|         | --property=Environment                |                           |                   |         |                     |                     |
	|         | --no-pager                            |                           |                   |         |                     |                     |
	| ssh     | docker-flags-715600 ssh               | docker-flags-715600       | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:20 UTC | 08 Jan 24 22:20 UTC |
	|         | sudo systemctl show docker            |                           |                   |         |                     |                     |
	|         | --property=ExecStart                  |                           |                   |         |                     |                     |
	|         | --no-pager                            |                           |                   |         |                     |                     |
	| delete  | -p docker-flags-715600                | docker-flags-715600       | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:20 UTC | 08 Jan 24 22:21 UTC |
	| start   | -p kubernetes-upgrade-158500          | kubernetes-upgrade-158500 | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:21 UTC | 08 Jan 24 22:29 UTC |
	|         | --memory=2200                         |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| ssh     | cert-options-283400 ssh               | cert-options-283400       | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:24 UTC | 08 Jan 24 22:25 UTC |
	|         | openssl x509 -text -noout -in         |                           |                   |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |                   |         |                     |                     |
	| ssh     | -p cert-options-283400 -- sudo        | cert-options-283400       | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:25 UTC | 08 Jan 24 22:25 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |                   |         |                     |                     |
	| delete  | -p cert-options-283400                | cert-options-283400       | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:25 UTC | 08 Jan 24 22:26 UTC |
	| start   | -p cert-expiration-785700             | cert-expiration-785700    | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:25 UTC |                     |
	|         | --memory=2048                         |                           |                   |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| start   | -p pause-810600                       | pause-810600              | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:27 UTC |                     |
	|         | --alsologtostderr -v=1                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| stop    | -p kubernetes-upgrade-158500          | kubernetes-upgrade-158500 | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:29 UTC | 08 Jan 24 22:30 UTC |
	| start   | -p kubernetes-upgrade-158500          | kubernetes-upgrade-158500 | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:30 UTC |                     |
	|         | --memory=2200                         |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| start   | -p stopped-upgrade-266300             | stopped-upgrade-266300    | minikube7\jenkins | v1.32.0 | 08 Jan 24 22:30 UTC |                     |
	|         | --memory=2200                         |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	|---------|---------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:30:55
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:30:55.503020    7060 out.go:296] Setting OutFile to fd 1756 ...
	I0108 22:30:55.503020    7060 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:30:55.503020    7060 out.go:309] Setting ErrFile to fd 1760...
	I0108 22:30:55.503020    7060 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:30:55.529021    7060 out.go:303] Setting JSON to false
	I0108 22:30:55.534030    7060 start.go:128] hostinfo: {"hostname":"minikube7","uptime":30997,"bootTime":1704722057,"procs":210,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0108 22:30:55.534030    7060 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 22:30:55.612012    7060 out.go:177] * [stopped-upgrade-266300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I0108 22:30:55.614102    7060 notify.go:220] Checking for updates...
	I0108 22:30:55.661294    7060 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 22:30:55.662239    7060 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:30:55.709310    7060 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0108 22:30:55.709310    7060 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 22:30:55.755866    7060 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:30:55.758382    7060 config.go:182] Loaded profile config "stopped-upgrade-266300": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0108 22:30:55.758382    7060 start_flags.go:694] config upgrade: Driver=hyperv
	I0108 22:30:55.758382    7060 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0108 22:30:55.758382    7060 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\stopped-upgrade-266300\config.json ...
	I0108 22:30:55.864769    7060 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0108 22:30:53.177815    5900 main.go:141] libmachine: [stdout =====>] : 172.29.109.19
	
	I0108 22:30:53.178044    5900 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:30:53.178699    5900 sshutil.go:53] new ssh client: &{IP:172.29.109.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\pause-810600\id_rsa Username:docker}
	I0108 22:30:53.208357    5900 main.go:141] libmachine: [stdout =====>] : 172.29.109.19
	
	I0108 22:30:53.208442    5900 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:30:53.208911    5900 sshutil.go:53] new ssh client: &{IP:172.29.109.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\pause-810600\id_rsa Username:docker}
	I0108 22:30:53.368008    5900 ssh_runner.go:235] Completed: cat /version.json: (5.8093002s)
	I0108 22:30:53.368102    5900 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.821355s)
	I0108 22:30:53.381798    5900 ssh_runner.go:195] Run: systemctl --version
	I0108 22:30:53.408301    5900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:30:53.417215    5900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:30:53.430853    5900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:30:53.448975    5900 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0108 22:30:53.449058    5900 start.go:475] detecting cgroup driver to use...
	I0108 22:30:53.449058    5900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:30:53.501012    5900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0108 22:30:53.542046    5900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 22:30:53.562904    5900 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 22:30:53.576846    5900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 22:30:53.610554    5900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 22:30:53.651809    5900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 22:30:53.687901    5900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 22:30:53.721711    5900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:30:53.762389    5900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 22:30:53.799343    5900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:30:53.830739    5900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:30:53.866363    5900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:30:54.119418    5900 ssh_runner.go:195] Run: sudo systemctl restart containerd
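
The sed commands above rewrite /etc/containerd/config.toml in place: SystemdCgroup is forced to false, the legacy runc v1/linux runtime names are swapped for io.containerd.runc.v2, and conf_dir is pointed at /etc/cni/net.d before systemd is reloaded and containerd restarted. As a rough, self-contained sketch of the same rewrite (illustrative only; minikube does this with sed over SSH, not with the hypothetical Go below):

	// Sketch of the config.toml rewrite performed via sed above: force
	// SystemdCgroup = false and point conf_dir at /etc/cni/net.d. Illustrative
	// only; minikube runs sed over SSH rather than anything like this.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		toml := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
			"  SystemdCgroup = true\n" +
			"[plugins.\"io.containerd.grpc.v1.cri\".cni]\n" +
			"  conf_dir = \"/etc/cni/net.mk\"\n"

		// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		toml = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
			ReplaceAllString(toml, "${1}SystemdCgroup = false")

		// Equivalent of: sed -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g'
		toml = regexp.MustCompile(`(?m)^(\s*)conf_dir = .*$`).
			ReplaceAllString(toml, `${1}conf_dir = "/etc/cni/net.d"`)

		fmt.Print(toml)
	}
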
	I0108 22:30:54.152394    5900 start.go:475] detecting cgroup driver to use...
	I0108 22:30:54.169481    5900 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 22:30:54.221045    5900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:30:54.264393    5900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:30:54.381619    5900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:30:54.422980    5900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 22:30:54.444112    5900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:30:54.503657    5900 ssh_runner.go:195] Run: which cri-dockerd
	I0108 22:30:54.527272    5900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 22:30:54.545830    5900 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0108 22:30:54.592769    5900 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 22:30:54.862645    5900 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 22:30:55.115143    5900 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0108 22:30:55.115143    5900 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0108 22:30:55.164594    5900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:30:55.438902    5900 ssh_runner.go:195] Run: sudo systemctl restart docker
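
docker.go above reports configuring Docker to use "cgroupfs" and then pushes a 130-byte /etc/docker/daemon.json before daemon-reload and a docker restart. The exact payload is not printed in this log; a plausible daemon.json that pins the cgroup driver looks like the sketch below, where every field is an assumption rather than the byte-for-byte file that was written:

	// Illustrative only: compose a daemon.json that selects the cgroupfs driver.
	// The field set here is an assumption, not the exact payload from the log.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		cfg := map[string]interface{}{
			// Matches "configuring docker to use \"cgroupfs\" as cgroup driver".
			"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
			"log-driver": "json-file",
			"log-opts":   map[string]string{"max-size": "100m"},
		}
		out, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		// This content would land in /etc/docker/daemon.json before `systemctl restart docker`.
		fmt.Println(string(out))
	}
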
	I0108 22:30:53.643807   10860 main.go:141] libmachine: [stdout =====>] : 
	I0108 22:30:53.643807   10860 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:30:54.647187   10860 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-158500 ).state
	I0108 22:30:57.077006   10860 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:30:57.077006   10860 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:30:57.077006   10860 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-158500 ).networkadapters[0]).ipaddresses[0]
	I0108 22:30:55.865772    7060 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:31:01.707242    7060 out.go:177] * Using the hyperv driver based on existing profile
	I0108 22:31:01.708872    7060 start.go:298] selected driver: hyperv
	I0108 22:31:01.708930    7060 start.go:902] validating driver "hyperv" against &{Name:stopped-upgrade-266300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0
ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.29.99.183 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 22:31:01.709267    7060 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:31:01.762794    7060 cni.go:84] Creating CNI manager for ""
	I0108 22:31:01.762921    7060 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0108 22:31:01.762921    7060 start_flags.go:323] config:
	{Name:stopped-upgrade-266300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.29.99.183 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 22:31:01.763385    7060 iso.go:125] acquiring lock: {Name:mk1869c5b5033c1301af33a7fa364ec43dd63efe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:31:01.814041    7060 out.go:177] * Starting control plane node stopped-upgrade-266300 in cluster stopped-upgrade-266300
	I0108 22:30:59.817949   10860 main.go:141] libmachine: [stdout =====>] : 
	I0108 22:30:59.817949   10860 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:31:00.828349   10860 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-158500 ).state
	I0108 22:31:01.815220    7060 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	W0108 22:31:01.861952    7060 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0108 22:31:01.862875    7060 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\stopped-upgrade-266300\config.json ...
	I0108 22:31:01.863006    7060 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0
	I0108 22:31:01.863064    7060 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5
	I0108 22:31:01.863114    7060 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0
	I0108 22:31:01.863179    7060 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0
	I0108 22:31:01.863064    7060 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0108 22:31:01.863179    7060 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0
	I0108 22:31:01.863064    7060 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0108 22:31:01.863064    7060 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I0108 22:31:01.867468    7060 start.go:365] acquiring machines lock for stopped-upgrade-266300: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 22:31:02.060198    7060 cache.go:107] acquiring lock: {Name:mkc6e9060bea9211e4f8126ac5de344442cb8c23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:31:02.060198    7060 cache.go:107] acquiring lock: {Name:mk43c24b3570a50e54ec9f1dc43aba5ea2e54859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:31:02.060198    7060 cache.go:107] acquiring lock: {Name:mk945c9573a262bf2c410f3ec338c9e4cbac7ce3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:31:02.060198    7060 cache.go:107] acquiring lock: {Name:mkeac0ccf1d6f0e0eb0c19801602a218964c6025 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:31:02.060198    7060 cache.go:107] acquiring lock: {Name:mke680978131adbec647605a81bab7c783de93d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:31:02.060198    7060 cache.go:107] acquiring lock: {Name:mk3a663ba67028a054dd5a6e96ba367c56e950d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:31:02.060198    7060 cache.go:107] acquiring lock: {Name:mk1869bccfa4db5e538bd31af28e9c95a48df16c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:31:02.060198    7060 cache.go:107] acquiring lock: {Name:mk6522f86f404131d1768d0de0ce775513ec42e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:31:02.060198    7060 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 exists
	I0108 22:31:02.060198    7060 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 exists
	I0108 22:31:02.060198    7060 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.17.0" took 197.0571ms
	I0108 22:31:02.060198    7060 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 succeeded
	I0108 22:31:02.060746    7060 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 exists
	I0108 22:31:02.060841    7060 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns_1.6.5" took 197.7266ms
	I0108 22:31:02.060841    7060 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 exists
	I0108 22:31:02.060924    7060 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 succeeded
	I0108 22:31:02.060746    7060 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 exists
	I0108 22:31:02.060924    7060 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 exists
	I0108 22:31:02.060198    7060 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0108 22:31:02.060841    7060 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.4.3-0" took 197.2426ms
	I0108 22:31:02.061455    7060 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.17.0" took 198.2758ms
	I0108 22:31:02.061519    7060 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 succeeded
	I0108 22:31:02.061169    7060 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.17.0" took 197.8519ms
	I0108 22:31:02.061519    7060 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.17.0" took 198.3908ms
	I0108 22:31:02.061596    7060 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 succeeded
	I0108 22:31:02.061519    7060 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 succeeded
	I0108 22:31:02.061519    7060 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 198.2481ms
	I0108 22:31:02.061758    7060 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0108 22:31:02.061596    7060 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 succeeded
	I0108 22:31:02.061123    7060 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 exists
	I0108 22:31:02.064832    7060 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.1" took 201.2334ms
	I0108 22:31:02.064832    7060 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 succeeded
	I0108 22:31:02.064832    7060 cache.go:87] Successfully saved all images to host disk.
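
The localpath.go "windows sanitize" lines earlier in this block map image references such as registry.k8s.io/kube-apiserver:v1.17.0 onto cache file names with the ':' replaced by '_', because ':' is not a valid character in a Windows file name. A minimal sketch of that mapping (the helper name is hypothetical):

	// Sketch of the "windows sanitize" step logged by localpath.go above:
	// swap the tag separator ':' for '_' so the cached tarball name is a
	// legal Windows path component.
	package main

	import (
		"fmt"
		"path/filepath"
		"strings"
	)

	// sanitizeCachePath is a hypothetical helper mirroring the before/after pairs in the log.
	func sanitizeCachePath(cacheDir, imageRef string) string {
		safe := strings.ReplaceAll(imageRef, ":", "_")
		return filepath.Join(cacheDir, filepath.FromSlash(safe))
	}

	func main() {
		// On Windows this prints ...\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0
		fmt.Println(sanitizeCachePath(
			`C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64`,
			"registry.k8s.io/kube-apiserver:v1.17.0"))
	}
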
	I0108 22:31:03.811035   10860 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:31:03.811035   10860 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:31:03.811123   10860 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-158500 ).networkadapters[0]).ipaddresses[0]
	I0108 22:31:06.461861   10860 main.go:141] libmachine: [stdout =====>] : 
	I0108 22:31:06.461861   10860 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:31:07.469619   10860 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-158500 ).state
	I0108 22:31:09.550643    5900 ssh_runner.go:235] Completed: sudo systemctl restart docker: (14.1116307s)
	I0108 22:31:09.566069    5900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0108 22:31:09.599098    5900 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0108 22:31:09.655007    5900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0108 22:31:09.697116    5900 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0108 22:31:09.985999    5900 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 22:31:10.176056    5900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:31:10.379582    5900 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0108 22:31:10.428587    5900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0108 22:31:10.468528    5900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:31:10.699558    5900 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0108 22:31:10.831280    5900 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 22:31:10.845661    5900 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 22:31:10.853716    5900 start.go:543] Will wait 60s for crictl version
	I0108 22:31:10.868150    5900 ssh_runner.go:195] Run: which crictl
	I0108 22:31:10.886119    5900 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:31:10.983744    5900 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0108 22:31:10.994625    5900 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 22:31:11.047423    5900 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 22:31:11.111792    5900 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0108 22:31:11.112232    5900 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0108 22:31:11.117007    5900 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0108 22:31:11.117082    5900 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0108 22:31:11.117082    5900 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0108 22:31:11.117082    5900 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:21:91:d6 Flags:up|broadcast|multicast|running}
	I0108 22:31:11.120438    5900 ip.go:210] interface addr: fe80::2be2:9d7a:5cc4:f25c/64
	I0108 22:31:11.120438    5900 ip.go:210] interface addr: 172.29.96.1/20
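
ip.go above walks the host adapters looking for one whose name starts with "vEthernet (Default Switch)", skipping "Ethernet 2" and the loopback pseudo-interface before reading the match's addresses. A small approximation of that lookup, assuming a plain prefix match is all that is involved:

	// Approximation of the getIPForInterface search logged above: find the first
	// adapter whose name carries the wanted prefix and list its addresses.
	package main

	import (
		"fmt"
		"net"
		"strings"
	)

	func main() {
		const prefix = "vEthernet (Default Switch)"
		ifaces, err := net.Interfaces()
		if err != nil {
			panic(err)
		}
		for _, iface := range ifaces {
			if !strings.HasPrefix(iface.Name, prefix) {
				fmt.Printf("%q does not match prefix %q\n", iface.Name, prefix)
				continue
			}
			addrs, _ := iface.Addrs()
			for _, a := range addrs {
				// e.g. 172.29.96.1/20 in the log above
				fmt.Println("interface addr:", a.String())
			}
			return
		}
		fmt.Println("no interface matched prefix", prefix)
	}
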
	I0108 22:31:11.134003    5900 ssh_runner.go:195] Run: grep 172.29.96.1	host.minikube.internal$ /etc/hosts
	I0108 22:31:11.140370    5900 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 22:31:11.151428    5900 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 22:31:11.185779    5900 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0108 22:31:11.185883    5900 docker.go:615] Images already preloaded, skipping extraction
	I0108 22:31:11.202940    5900 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 22:31:11.232375    5900 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0108 22:31:11.232516    5900 cache_images.go:84] Images are preloaded, skipping loading
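
The two docker images listings above are checked against the image set required for Kubernetes v1.28.4; since every required image is already present, cache_images.go skips loading. A sketch of that comparison, with the "have" set hard-coded from the stdout block above instead of parsed from SSH output:

	// Sketch of the "Images are preloaded, skipping loading" decision: compare
	// the `docker images --format {{.Repository}}:{{.Tag}}` output against the
	// required image list. The comparison logic itself is an assumption.
	package main

	import "fmt"

	func main() {
		wanted := []string{
			"registry.k8s.io/kube-apiserver:v1.28.4",
			"registry.k8s.io/kube-controller-manager:v1.28.4",
			"registry.k8s.io/kube-scheduler:v1.28.4",
			"registry.k8s.io/kube-proxy:v1.28.4",
			"registry.k8s.io/etcd:3.5.9-0",
			"registry.k8s.io/coredns/coredns:v1.10.1",
			"registry.k8s.io/pause:3.9",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
		}
		// Hard-coded from the stdout block above; the real flow parses SSH output.
		have := map[string]bool{
			"registry.k8s.io/kube-apiserver:v1.28.4":          true,
			"registry.k8s.io/kube-proxy:v1.28.4":              true,
			"registry.k8s.io/kube-controller-manager:v1.28.4": true,
			"registry.k8s.io/kube-scheduler:v1.28.4":          true,
			"registry.k8s.io/etcd:3.5.9-0":                    true,
			"registry.k8s.io/coredns/coredns:v1.10.1":         true,
			"registry.k8s.io/pause:3.9":                       true,
			"gcr.io/k8s-minikube/storage-provisioner:v5":      true,
		}
		var missing []string
		for _, img := range wanted {
			if !have[img] {
				missing = append(missing, img)
			}
		}
		if len(missing) == 0 {
			fmt.Println("Images are preloaded, skipping loading")
		} else {
			fmt.Println("need to load:", missing)
		}
	}
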
	I0108 22:31:11.244184    5900 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 22:31:11.282211    5900 cni.go:84] Creating CNI manager for ""
	I0108 22:31:11.282541    5900 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 22:31:11.282601    5900 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:31:11.282677    5900 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.29.109.19 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-810600 NodeName:pause-810600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.29.109.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.29.109.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 22:31:11.283029    5900 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.29.109.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "pause-810600"
	  kubeletExtraArgs:
	    node-ip: 172.29.109.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.29.109.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 22:31:11.283258    5900 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=pause-810600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.109.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:pause-810600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 22:31:11.299792    5900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 22:31:11.318624    5900 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:31:11.333144    5900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:31:11.350364    5900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0108 22:31:11.381309    5900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 22:31:11.411421    5900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0108 22:31:11.457623    5900 ssh_runner.go:195] Run: grep 172.29.109.19	control-plane.minikube.internal$ /etc/hosts
	I0108 22:31:11.465188    5900 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-810600 for IP: 172.29.109.19
	I0108 22:31:11.465296    5900 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:31:11.465952    5900 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0108 22:31:11.466504    5900 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0108 22:31:11.467264    5900 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-810600\client.key
	I0108 22:31:11.467652    5900 certs.go:315] skipping minikube signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-810600\apiserver.key.ba5f2cee
	I0108 22:31:11.468012    5900 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-810600\proxy-client.key
	I0108 22:31:11.469892    5900 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\3008.pem (1338 bytes)
	W0108 22:31:11.470222    5900 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\3008_empty.pem, impossibly tiny 0 bytes
	I0108 22:31:11.470338    5900 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0108 22:31:11.470711    5900 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0108 22:31:11.471005    5900 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0108 22:31:11.471272    5900 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0108 22:31:11.471707    5900 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem (1708 bytes)
	I0108 22:31:11.473128    5900 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-810600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:31:11.517030    5900 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-810600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 22:31:11.558724    5900 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-810600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:31:11.600656    5900 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-810600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 22:31:11.643258    5900 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:31:11.694118    5900 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 22:31:09.818638   10860 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:31:09.818638   10860 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:31:09.818853   10860 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-158500 ).networkadapters[0]).ipaddresses[0]
	I0108 22:31:12.536922   10860 main.go:141] libmachine: [stdout =====>] : 
	I0108 22:31:12.537014   10860 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:31:11.739750    5900 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:31:11.799694    5900 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:31:11.841052    5900 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem --> /usr/share/ca-certificates/30082.pem (1708 bytes)
	I0108 22:31:11.881230    5900 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:31:11.921524    5900 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\3008.pem --> /usr/share/ca-certificates/3008.pem (1338 bytes)
	I0108 22:31:11.962187    5900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:31:12.012097    5900 ssh_runner.go:195] Run: openssl version
	I0108 22:31:12.034498    5900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30082.pem && ln -fs /usr/share/ca-certificates/30082.pem /etc/ssl/certs/30082.pem"
	I0108 22:31:12.068702    5900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/30082.pem
	I0108 22:31:12.076430    5900 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:28 /usr/share/ca-certificates/30082.pem
	I0108 22:31:12.090959    5900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30082.pem
	I0108 22:31:12.115523    5900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/30082.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 22:31:12.143949    5900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:31:12.174558    5900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:31:12.185212    5900 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:14 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:31:12.199792    5900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:31:12.222113    5900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:31:12.251274    5900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3008.pem && ln -fs /usr/share/ca-certificates/3008.pem /etc/ssl/certs/3008.pem"
	I0108 22:31:12.288276    5900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3008.pem
	I0108 22:31:12.296272    5900 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:28 /usr/share/ca-certificates/3008.pem
	I0108 22:31:12.310970    5900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3008.pem
	I0108 22:31:12.334650    5900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3008.pem /etc/ssl/certs/51391683.0"
	I0108 22:31:12.366674    5900 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:31:12.387377    5900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 22:31:12.414712    5900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 22:31:12.444975    5900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 22:31:12.465137    5900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 22:31:12.489496    5900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 22:31:12.521590    5900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
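
The six openssl runs above use -checkend 86400, which exits non-zero if a certificate expires within the next 24 hours; a failure here would force the cert to be regenerated before the cluster restart. A local stand-in for those probes (minikube executes them over SSH; the certificate paths are copied from the log):

	// Sketch of the certificate freshness probes above: `openssl x509 -checkend 86400`
	// succeeds only if the certificate is still valid 24 hours from now.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
			"/var/lib/minikube/certs/etcd/peer.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			// -checkend 86400: non-zero exit if the cert expires within 24 hours.
			if err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run(); err != nil {
				fmt.Printf("%s: expiring within 24h or unreadable (%v)\n", c, err)
				continue
			}
			fmt.Printf("%s: valid for at least another 24h\n", c)
		}
	}
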
	I0108 22:31:12.530842    5900 kubeadm.go:404] StartCluster: {Name:pause-810600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:pause-810600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.29.109.19 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:31:12.542579    5900 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 22:31:12.584788    5900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:31:12.600076    5900 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 22:31:12.600135    5900 kubeadm.go:636] restartCluster start
	I0108 22:31:12.614003    5900 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 22:31:12.657596    5900 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:31:12.659595    5900 kubeconfig.go:92] found "pause-810600" server: "https://172.29.109.19:8443"
	I0108 22:31:12.663299    5900 kapi.go:59] client config for pause-810600: &rest.Config{Host:"https://172.29.109.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\pause-810600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\pause-810600\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x188c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 22:31:12.681679    5900 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 22:31:12.700075    5900 api_server.go:166] Checking apiserver status ...
	I0108 22:31:12.717405    5900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:31:12.735780    5900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:31:13.213954    5900 api_server.go:166] Checking apiserver status ...
	I0108 22:31:13.227048    5900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:31:13.245843    5900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:31:13.706089    5900 api_server.go:166] Checking apiserver status ...
	I0108 22:31:13.720766    5900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:31:13.740196    5900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:31:14.210273    5900 api_server.go:166] Checking apiserver status ...
	I0108 22:31:14.223237    5900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:31:14.243180    5900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:31:14.700352    5900 api_server.go:166] Checking apiserver status ...
	I0108 22:31:14.715105    5900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:31:14.734877    5900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:31:15.204820    5900 api_server.go:166] Checking apiserver status ...
	I0108 22:31:15.226548    5900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:31:15.245427    5900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:31:15.709138    5900 api_server.go:166] Checking apiserver status ...
	I0108 22:31:15.723691    5900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:31:15.742970    5900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:31:16.214931    5900 api_server.go:166] Checking apiserver status ...
	I0108 22:31:16.229493    5900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:31:16.259897    5900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
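
The repeated "Checking apiserver status" cycles above poll sudo pgrep for a kube-apiserver process roughly every half second while the control plane comes back up. A simplified version of that wait loop (the interval and overall deadline are guesses read off the timestamps, not values taken from the log, and sudo is dropped for the local sketch):

	// Simplified stand-in for the apiserver polling above: retry pgrep until a
	// kube-apiserver process appears or a deadline passes.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			// Same match pattern as the command logged repeatedly above.
			out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				fmt.Printf("apiserver pid: %s", out)
				return
			}
			fmt.Println("stopped: unable to get apiserver pid, retrying")
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}
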
	I0108 22:31:16.702135    5900 api_server.go:166] Checking apiserver status ...
	I0108 22:31:16.725069    5900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:31:16.722361    6868 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.9376352s)
	I0108 22:31:16.741515    6868 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0108 22:31:16.798744    6868 out.go:177] 
	W0108 22:31:16.799741    6868 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2024-01-08 22:20:34 UTC, ends at Mon 2024-01-08 22:31:16 UTC. --
	Jan 08 22:21:28 cert-expiration-785700 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.193198552Z" level=info msg="Starting up"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.194123684Z" level=info msg="containerd not running, starting managed containerd"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.195271123Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.233412041Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.257365668Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.258201797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.260381872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.260508677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.260800287Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.260950792Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.261049596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.261192900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.261285404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.261395907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.261872024Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.262002128Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.262020129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.262199935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.262290738Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.262437343Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.262461444Z" level=info msg="metadata content store policy set" policy=shared
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.272974407Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273082111Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273104012Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273138013Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273154614Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273165614Z" level=info msg="NRI interface is disabled by configuration."
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273177914Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273607429Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273717833Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273739634Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273754834Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273769235Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273786235Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273799636Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273812736Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273828837Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273842737Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273855238Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.273868538Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274022844Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274282553Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274338055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274355855Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274379356Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274527561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274555362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274570563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274583363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274596463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274610264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274623064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274635665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274649265Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274704967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274830372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274850672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274867173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274881373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274896374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274908574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274919775Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274934175Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274947276Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.274958776Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.275199284Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.275337889Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.275388891Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 08 22:21:28 cert-expiration-785700 dockerd[679]: time="2024-01-08T22:21:28.275412992Z" level=info msg="containerd successfully booted in 0.044836s"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.318338074Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.334201222Z" level=info msg="Loading containers: start."
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.551807639Z" level=info msg="Loading containers: done."
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.568860428Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.568969332Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.569015134Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.569053435Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.569102537Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.569340445Z" level=info msg="Daemon has completed initialization"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.623614920Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 08 22:21:28 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:28.623715923Z" level=info msg="API listen on [::]:2376"
	Jan 08 22:21:28 cert-expiration-785700 systemd[1]: Started Docker Application Container Engine.
	Jan 08 22:21:59 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:59.212133234Z" level=info msg="Processing signal 'terminated'"
	Jan 08 22:21:59 cert-expiration-785700 systemd[1]: Stopping Docker Application Container Engine...
	Jan 08 22:21:59 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:59.214439634Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 08 22:21:59 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:59.214443934Z" level=info msg="Daemon shutdown complete"
	Jan 08 22:21:59 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:59.215317534Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 08 22:21:59 cert-expiration-785700 dockerd[672]: time="2024-01-08T22:21:59.215638834Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 08 22:22:00 cert-expiration-785700 systemd[1]: docker.service: Succeeded.
	Jan 08 22:22:00 cert-expiration-785700 systemd[1]: Stopped Docker Application Container Engine.
	Jan 08 22:22:00 cert-expiration-785700 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.293546134Z" level=info msg="Starting up"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.294646234Z" level=info msg="containerd not running, starting managed containerd"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.296068734Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1016
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.335709634Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.365911834Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.366051134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.368846634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.368961934Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369206634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369349734Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369384534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369411134Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369424434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369655334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369850534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369953034Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.369973334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370351534Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370398834Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370423734Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370438134Z" level=info msg="metadata content store policy set" policy=shared
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370542734Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370566134Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370580334Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370662934Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370684634Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370698134Z" level=info msg="NRI interface is disabled by configuration."
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370712634Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370763034Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370801634Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370820034Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370835334Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370851034Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370868434Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370883334Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370898134Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370912934Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370928034Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370942934Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370957534Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.370996834Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371358834Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371489234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371514634Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371539134Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371660334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371761734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371783134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371797234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371811734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371825534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371838834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371851934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371871734Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371906134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371922734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371939134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371953234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371967234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371982034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.371995434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372008734Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372024034Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372036534Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372047934Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372388834Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372590934Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372694934Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1016]: time="2024-01-08T22:22:00.372790434Z" level=info msg="containerd successfully booted in 0.039586s"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.398779034Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.471986334Z" level=info msg="Loading containers: start."
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.641571534Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.709143734Z" level=info msg="Loading containers: done."
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.729675034Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.729696734Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.729704234Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.729710534Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.729731034Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.729776034Z" level=info msg="Daemon has completed initialization"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.782866134Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 08 22:22:00 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:00.783014634Z" level=info msg="API listen on [::]:2376"
	Jan 08 22:22:00 cert-expiration-785700 systemd[1]: Started Docker Application Container Engine.
	Jan 08 22:22:14 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:14.674247734Z" level=info msg="Processing signal 'terminated'"
	Jan 08 22:22:14 cert-expiration-785700 systemd[1]: Stopping Docker Application Container Engine...
	Jan 08 22:22:14 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:14.676458034Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 08 22:22:14 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:14.676486534Z" level=info msg="Daemon shutdown complete"
	Jan 08 22:22:14 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:14.677006334Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 08 22:22:14 cert-expiration-785700 dockerd[1010]: time="2024-01-08T22:22:14.677256634Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 08 22:22:15 cert-expiration-785700 systemd[1]: docker.service: Succeeded.
	Jan 08 22:22:15 cert-expiration-785700 systemd[1]: Stopped Docker Application Container Engine.
	Jan 08 22:22:15 cert-expiration-785700 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:15.753378834Z" level=info msg="Starting up"
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:15.755776234Z" level=info msg="containerd not running, starting managed containerd"
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:15.757719634Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1321
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.792922934Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.819834134Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.819938334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.822840434Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.822947134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823208734Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823305834Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823337734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823362734Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823376034Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823400234Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823737834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823855334Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.823907834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824128834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824221234Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824248434Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824260534Z" level=info msg="metadata content store policy set" policy=shared
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824387934Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824488234Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824508534Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824555634Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824571334Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824582734Z" level=info msg="NRI interface is disabled by configuration."
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824640934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824694634Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824786734Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824807834Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824827434Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824843834Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824860934Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824875334Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824889534Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824906334Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824920934Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824951334Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.824963034Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825075234Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825347734Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825456434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825477534Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825501434Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825553034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825691034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825710434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825726234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825739834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825754434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825767034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825778734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825792334Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825824534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825840834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825853034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825865534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825877934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825895934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825909534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825922634Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825936634Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825948334Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.825958734Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.826270234Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.826403834Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.826526734Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 08 22:22:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:15.826643734Z" level=info msg="containerd successfully booted in 0.034679s"
	Jan 08 22:22:16 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:16.428774034Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.023068134Z" level=info msg="Loading containers: start."
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.190851634Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.257313934Z" level=info msg="Loading containers: done."
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.281708734Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.281816934Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.281831234Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.281838834Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.281861634Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.281910334Z" level=info msg="Daemon has completed initialization"
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.323263034Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 08 22:22:17 cert-expiration-785700 systemd[1]: Started Docker Application Container Engine.
	Jan 08 22:22:17 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:22:17.325796734Z" level=info msg="API listen on [::]:2376"
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.265577518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.266731262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.267137378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.267381287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.378441754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.378589559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.378687963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.378705464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.380351427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.380562735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.381584574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.381821984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.428892592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.429202304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.429303608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:25 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:25.429413112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.129440897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.129851712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.130056619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.130244926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.383784657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.384505283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.384727191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.384874597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.805398642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.805666251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.805771255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:26 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:26.805976862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:27 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:27.056908975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:27 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:27.057353090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:27 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:27.060792706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:27 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:27.060970012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.730904747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.730971447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.730990147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.731004748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.756114496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.756200797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.756232097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:46 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:46.756249598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.006898177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.007206580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.007322181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.007540483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.612067897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.612473001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.613565511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:47 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:47.614042016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.028000444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.028305647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.028379748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.028418448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.816560710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.817148015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.817282716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:22:48 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:22:48.817302817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:30:04 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:04.816052792Z" level=info msg="Processing signal 'terminated'"
	Jan 08 22:30:04 cert-expiration-785700 systemd[1]: Stopping Docker Application Container Engine...
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.131852499Z" level=info msg="ignoring event" container=f35e0da4b4ec6dae9d7a25ee062170b857f23499c440736930afe77ea62aeccf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.132121600Z" level=info msg="shim disconnected" id=f35e0da4b4ec6dae9d7a25ee062170b857f23499c440736930afe77ea62aeccf namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.132283700Z" level=warning msg="cleaning up after shim disconnected" id=f35e0da4b4ec6dae9d7a25ee062170b857f23499c440736930afe77ea62aeccf namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.132303300Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.134417706Z" level=info msg="ignoring event" container=1e859794dd09a4a134ba89fe24c8fd683d82d3d5f205928e1b576a6c18430767 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.136326612Z" level=info msg="shim disconnected" id=1e859794dd09a4a134ba89fe24c8fd683d82d3d5f205928e1b576a6c18430767 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.136368112Z" level=warning msg="cleaning up after shim disconnected" id=1e859794dd09a4a134ba89fe24c8fd683d82d3d5f205928e1b576a6c18430767 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.136379512Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.137027914Z" level=info msg="shim disconnected" id=a1a83e8a6210894b8a556485b2a6df18f9247bcb42f7022c74bbf67e34c873c0 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.137072314Z" level=warning msg="cleaning up after shim disconnected" id=a1a83e8a6210894b8a556485b2a6df18f9247bcb42f7022c74bbf67e34c873c0 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.137083714Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.137966517Z" level=info msg="ignoring event" container=a1a83e8a6210894b8a556485b2a6df18f9247bcb42f7022c74bbf67e34c873c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.154409464Z" level=info msg="ignoring event" container=de386a0a973bab42bde9d8877c103167f5a0479a58cfcfdc2e337d0959040ca6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.154946565Z" level=info msg="shim disconnected" id=de386a0a973bab42bde9d8877c103167f5a0479a58cfcfdc2e337d0959040ca6 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.154992466Z" level=warning msg="cleaning up after shim disconnected" id=de386a0a973bab42bde9d8877c103167f5a0479a58cfcfdc2e337d0959040ca6 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.155074066Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.165106095Z" level=info msg="ignoring event" container=953b566f951bb9432c01fbfd656e52890ebdac05df59b6abf4065532eaefa43c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.174232321Z" level=info msg="shim disconnected" id=953b566f951bb9432c01fbfd656e52890ebdac05df59b6abf4065532eaefa43c namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.174334921Z" level=warning msg="cleaning up after shim disconnected" id=953b566f951bb9432c01fbfd656e52890ebdac05df59b6abf4065532eaefa43c namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.174349321Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.180943240Z" level=info msg="ignoring event" container=504ed1e085f548241814297002225defded88b3afe543f08d6cd32c4ed42477b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.181339641Z" level=info msg="shim disconnected" id=504ed1e085f548241814297002225defded88b3afe543f08d6cd32c4ed42477b namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.181494842Z" level=warning msg="cleaning up after shim disconnected" id=504ed1e085f548241814297002225defded88b3afe543f08d6cd32c4ed42477b namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.181513642Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.212464231Z" level=info msg="ignoring event" container=d475444ed114cbd25bf76dd2e12e846d9e489d9fbb36403c4b19c8ac7c310f5f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.212714131Z" level=info msg="ignoring event" container=c67eb3764e72f785d32a4823e8d4ab77b9eb868253cd92265295e7aa1498452d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.212828232Z" level=info msg="ignoring event" container=fa758777e424baa614120a5c1282754590ada8a1fb8d922249f6069de611c942 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.212848032Z" level=info msg="ignoring event" container=0bb9aa34fef2619bee5818a4514e3f2576426df2090a293dda8b10bbefaea427 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.212865532Z" level=info msg="ignoring event" container=22d7cda2998201806029779579d6063512f3da5f49f1ade7ec46aa613cf020d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213386633Z" level=info msg="shim disconnected" id=22d7cda2998201806029779579d6063512f3da5f49f1ade7ec46aa613cf020d1 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213467533Z" level=warning msg="cleaning up after shim disconnected" id=22d7cda2998201806029779579d6063512f3da5f49f1ade7ec46aa613cf020d1 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213482533Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213654034Z" level=info msg="shim disconnected" id=fa758777e424baa614120a5c1282754590ada8a1fb8d922249f6069de611c942 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213722434Z" level=warning msg="cleaning up after shim disconnected" id=fa758777e424baa614120a5c1282754590ada8a1fb8d922249f6069de611c942 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213736034Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213848435Z" level=info msg="shim disconnected" id=0bb9aa34fef2619bee5818a4514e3f2576426df2090a293dda8b10bbefaea427 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213877135Z" level=warning msg="cleaning up after shim disconnected" id=0bb9aa34fef2619bee5818a4514e3f2576426df2090a293dda8b10bbefaea427 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.213901935Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.214106735Z" level=info msg="shim disconnected" id=d475444ed114cbd25bf76dd2e12e846d9e489d9fbb36403c4b19c8ac7c310f5f namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.214158535Z" level=warning msg="cleaning up after shim disconnected" id=d475444ed114cbd25bf76dd2e12e846d9e489d9fbb36403c4b19c8ac7c310f5f namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.214170735Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.216849443Z" level=info msg="shim disconnected" id=c67eb3764e72f785d32a4823e8d4ab77b9eb868253cd92265295e7aa1498452d namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.216893443Z" level=warning msg="cleaning up after shim disconnected" id=c67eb3764e72f785d32a4823e8d4ab77b9eb868253cd92265295e7aa1498452d namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.216937643Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.243373719Z" level=info msg="shim disconnected" id=03eb614a5c902b45ed9463f7255a237cfb92b53e9afecf9c9b5fed6ce550bed9 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.248682035Z" level=warning msg="cleaning up after shim disconnected" id=03eb614a5c902b45ed9463f7255a237cfb92b53e9afecf9c9b5fed6ce550bed9 namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:05.248702035Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:05 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:05.249079536Z" level=info msg="ignoring event" container=03eb614a5c902b45ed9463f7255a237cfb92b53e9afecf9c9b5fed6ce550bed9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:10 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:10.050250521Z" level=info msg="shim disconnected" id=742d6685f01a6bde4f129a0977d8289d3602e5479ab8a7f31554a1b0eae10f41 namespace=moby
	Jan 08 22:30:10 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:10.051070323Z" level=warning msg="cleaning up after shim disconnected" id=742d6685f01a6bde4f129a0977d8289d3602e5479ab8a7f31554a1b0eae10f41 namespace=moby
	Jan 08 22:30:10 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:10.051160224Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:10 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:10.052309927Z" level=info msg="ignoring event" container=742d6685f01a6bde4f129a0977d8289d3602e5479ab8a7f31554a1b0eae10f41 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:15.068932970Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=bb7f8834279bfe095fb51af0415e582862d3cc4ccd1c9f701ec94c1d47e23339
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:15.126669689Z" level=info msg="ignoring event" container=bb7f8834279bfe095fb51af0415e582862d3cc4ccd1c9f701ec94c1d47e23339 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:15.128287503Z" level=info msg="shim disconnected" id=bb7f8834279bfe095fb51af0415e582862d3cc4ccd1c9f701ec94c1d47e23339 namespace=moby
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:15.128583006Z" level=warning msg="cleaning up after shim disconnected" id=bb7f8834279bfe095fb51af0415e582862d3cc4ccd1c9f701ec94c1d47e23339 namespace=moby
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1321]: time="2024-01-08T22:30:15.128603606Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:15.623195052Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:15.623769257Z" level=info msg="Daemon shutdown complete"
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:15.624021560Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 08 22:30:15 cert-expiration-785700 dockerd[1315]: time="2024-01-08T22:30:15.624311462Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 08 22:30:16 cert-expiration-785700 systemd[1]: docker.service: Succeeded.
	Jan 08 22:30:16 cert-expiration-785700 systemd[1]: Stopped Docker Application Container Engine.
	Jan 08 22:30:16 cert-expiration-785700 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 22:30:16 cert-expiration-785700 dockerd[8316]: time="2024-01-08T22:30:16.703936206Z" level=info msg="Starting up"
	Jan 08 22:31:16 cert-expiration-785700 dockerd[8316]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 08 22:31:16 cert-expiration-785700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 08 22:31:16 cert-expiration-785700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 08 22:31:16 cert-expiration-785700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0108 22:31:16.799741    6868 out.go:239] * 
	W0108 22:31:16.802012    6868 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 22:31:16.802322    6868 out.go:177] 
	I0108 22:31:13.546692   10860 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-158500 ).state
	I0108 22:31:15.819654   10860 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:31:15.819654   10860 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:31:15.819836   10860 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-158500 ).networkadapters[0]).ipaddresses[0]
	W0108 22:31:16.786408    5900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:31:17.210273    5900 api_server.go:166] Checking apiserver status ...
	I0108 22:31:17.229505    5900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:31:17.287981    5900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:31:17.709552    5900 api_server.go:166] Checking apiserver status ...
	I0108 22:31:17.726923    5900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:31:17.794110    5900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:31:18.200797    5900 api_server.go:166] Checking apiserver status ...
	I0108 22:31:18.216381    5900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:31:18.271144    5900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:31:18.707345    5900 api_server.go:166] Checking apiserver status ...
	I0108 22:31:18.726338    5900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:31:18.845112    5900 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6797/cgroup
	I0108 22:31:18.874812    5900 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda401ddf287ad9f784beb0b1cc332aa50/093895bae6c504321fc95ad4ff69b2c20a6a60e34234714a8453315da995930a"
	I0108 22:31:18.891701    5900 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda401ddf287ad9f784beb0b1cc332aa50/093895bae6c504321fc95ad4ff69b2c20a6a60e34234714a8453315da995930a/freezer.state
	I0108 22:31:18.927370    5900 api_server.go:204] freezer state: "THAWED"
	I0108 22:31:18.927469    5900 api_server.go:253] Checking apiserver healthz at https://172.29.109.19:8443/healthz ...
	I0108 22:31:18.643895   10860 main.go:141] libmachine: [stdout =====>] : 172.29.99.53
	
	I0108 22:31:18.643895   10860 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:31:18.647415   10860 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-158500 ).state
	I0108 22:31:20.977817   10860 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:31:20.977817   10860 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:31:20.977817   10860 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-158500 ).networkadapters[0]).ipaddresses[0]
	I0108 22:31:23.253584    5900 api_server.go:279] https://172.29.109.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:31:23.253909    5900 retry.go:31] will retry after 260.232534ms: https://172.29.109.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:31:23.528856    5900 api_server.go:253] Checking apiserver healthz at https://172.29.109.19:8443/healthz ...
	I0108 22:31:23.543906    5900 api_server.go:279] https://172.29.109.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:31:23.544116    5900 retry.go:31] will retry after 384.951177ms: https://172.29.109.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:31:23.940682    5900 api_server.go:253] Checking apiserver healthz at https://172.29.109.19:8443/healthz ...
	I0108 22:31:23.949019    5900 api_server.go:279] https://172.29.109.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:31:23.949019    5900 retry.go:31] will retry after 441.693409ms: https://172.29.109.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:31:24.403526    5900 api_server.go:253] Checking apiserver healthz at https://172.29.109.19:8443/healthz ...
	I0108 22:31:24.416961    5900 api_server.go:279] https://172.29.109.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:31:24.417100    5900 retry.go:31] will retry after 376.956665ms: https://172.29.109.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:31:24.798868    5900 api_server.go:253] Checking apiserver healthz at https://172.29.109.19:8443/healthz ...
	I0108 22:31:24.807777    5900 api_server.go:279] https://172.29.109.19:8443/healthz returned 200:
	ok
	I0108 22:31:24.837086    5900 system_pods.go:86] 6 kube-system pods found
	I0108 22:31:24.837086    5900 system_pods.go:89] "coredns-5dd5756b68-br7vx" [ffa87870-c6fa-423e-9668-a8d199f188ad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 22:31:24.837086    5900 system_pods.go:89] "etcd-pause-810600" [4c80d858-ebfb-4e1b-a560-b9d327ca3f2d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 22:31:24.837086    5900 system_pods.go:89] "kube-apiserver-pause-810600" [da630821-f106-43cb-ada1-71c0fa7ef20a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 22:31:24.838095    5900 system_pods.go:89] "kube-controller-manager-pause-810600" [7faf9683-3d82-4759-a3f8-af52d0537db0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 22:31:24.838095    5900 system_pods.go:89] "kube-proxy-fdvrb" [7a6304fd-ae2c-4a4b-a8fd-d0627666607d] Running
	I0108 22:31:24.838095    5900 system_pods.go:89] "kube-scheduler-pause-810600" [d0212c43-6b97-4715-959c-2ecab124c44d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 22:31:24.840411    5900 api_server.go:141] control plane version: v1.28.4
	I0108 22:31:24.840411    5900 kubeadm.go:630] The running cluster does not require reconfiguration: 172.29.109.19
	I0108 22:31:24.840411    5900 kubeadm.go:684] Taking a shortcut, as the cluster seems to be properly configured
	I0108 22:31:24.840411    5900 kubeadm.go:640] restartCluster took 12.2402146s
	I0108 22:31:24.841058    5900 kubeadm.go:406] StartCluster complete in 12.3102182s
	I0108 22:31:24.841116    5900 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:31:24.841241    5900 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 22:31:24.844604    5900 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:31:24.846218    5900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:31:24.846514    5900 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 22:31:24.847116    5900 config.go:182] Loaded profile config "pause-810600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 22:31:24.847816    5900 out.go:177] * Enabled addons: 
	I0108 22:31:24.848701    5900 addons.go:508] enable addons completed in 2.0954ms: enabled=[]
	I0108 22:31:24.863285    5900 kapi.go:59] client config for pause-810600: &rest.Config{Host:"https://172.29.109.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\pause-810600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\pause-810600\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x188c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 22:31:24.869805    5900 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-810600" context rescaled to 1 replicas
	I0108 22:31:24.869805    5900 start.go:223] Will wait 6m0s for node &{Name: IP:172.29.109.19 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 22:31:24.871562    5900 out.go:177] * Verifying Kubernetes components...
	I0108 22:31:24.890437    5900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:31:25.015039    5900 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0108 22:31:25.015039    5900 node_ready.go:35] waiting up to 6m0s for node "pause-810600" to be "Ready" ...
	I0108 22:31:25.020536    5900 node_ready.go:49] node "pause-810600" has status "Ready":"True"
	I0108 22:31:25.020536    5900 node_ready.go:38] duration metric: took 5.4963ms waiting for node "pause-810600" to be "Ready" ...
	I0108 22:31:25.020536    5900 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:31:25.028538    5900 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-br7vx" in "kube-system" namespace to be "Ready" ...
	I0108 22:31:23.718918   10860 main.go:141] libmachine: [stdout =====>] : 172.29.99.53
	
	I0108 22:31:23.718918   10860 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:31:23.719213   10860 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-158500\config.json ...
	I0108 22:31:23.721953   10860 machine.go:88] provisioning docker machine ...
	I0108 22:31:23.722047   10860 buildroot.go:166] provisioning hostname "kubernetes-upgrade-158500"
	I0108 22:31:23.722097   10860 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-158500 ).state
	I0108 22:31:26.018194   10860 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:31:26.018194   10860 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:31:26.018366   10860 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-158500 ).networkadapters[0]).ipaddresses[0]
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-01-08 22:20:34 UTC, ends at Mon 2024-01-08 22:33:17 UTC. --
	Jan 08 22:30:16 cert-expiration-785700 dockerd[8316]: time="2024-01-08T22:30:16.703936206Z" level=info msg="Starting up"
	Jan 08 22:31:16 cert-expiration-785700 dockerd[8316]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 08 22:31:16 cert-expiration-785700 cri-dockerd[1206]: time="2024-01-08T22:31:16Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Jan 08 22:31:16 cert-expiration-785700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 08 22:31:16 cert-expiration-785700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 08 22:31:16 cert-expiration-785700 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 08 22:31:16 cert-expiration-785700 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Jan 08 22:31:16 cert-expiration-785700 systemd[1]: Stopped Docker Application Container Engine.
	Jan 08 22:31:16 cert-expiration-785700 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 22:31:16 cert-expiration-785700 dockerd[8453]: time="2024-01-08T22:31:16.935356593Z" level=info msg="Starting up"
	Jan 08 22:32:16 cert-expiration-785700 dockerd[8453]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 08 22:32:16 cert-expiration-785700 cri-dockerd[1206]: time="2024-01-08T22:32:16Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Jan 08 22:32:16 cert-expiration-785700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 08 22:32:16 cert-expiration-785700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 08 22:32:16 cert-expiration-785700 systemd[1]: Failed to start Docker Application Container Engine.
	Jan 08 22:32:16 cert-expiration-785700 cri-dockerd[1206]: time="2024-01-08T22:32:16Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 08 22:32:17 cert-expiration-785700 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jan 08 22:32:17 cert-expiration-785700 systemd[1]: Stopped Docker Application Container Engine.
	Jan 08 22:32:17 cert-expiration-785700 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 22:32:17 cert-expiration-785700 dockerd[8681]: time="2024-01-08T22:32:17.298887336Z" level=info msg="Starting up"
	Jan 08 22:33:17 cert-expiration-785700 dockerd[8681]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 08 22:33:17 cert-expiration-785700 cri-dockerd[1206]: time="2024-01-08T22:33:17Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Jan 08 22:33:17 cert-expiration-785700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 08 22:33:17 cert-expiration-785700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 08 22:33:17 cert-expiration-785700 systemd[1]: Failed to start Docker Application Container Engine.
	
	
	==> container status <==
	
	==> describe nodes <==
	
	==> dmesg <==
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +8.679666] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000017] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan 8 22:21] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.139685] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[ +30.076045] systemd-fstab-generator[935]: Ignoring "noauto" for root device
	[  +0.618092] systemd-fstab-generator[976]: Ignoring "noauto" for root device
	[  +0.181168] systemd-fstab-generator[987]: Ignoring "noauto" for root device
	[  +0.207311] systemd-fstab-generator[1000]: Ignoring "noauto" for root device
	[Jan 8 22:22] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.437720] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.190964] systemd-fstab-generator[1172]: Ignoring "noauto" for root device
	[  +0.180557] systemd-fstab-generator[1183]: Ignoring "noauto" for root device
	[  +0.248354] systemd-fstab-generator[1198]: Ignoring "noauto" for root device
	[ +12.985418] systemd-fstab-generator[1306]: Ignoring "noauto" for root device
	[  +2.504818] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.880700] systemd-fstab-generator[1694]: Ignoring "noauto" for root device
	[  +0.644492] kauditd_printk_skb: 29 callbacks suppressed
	[  +9.218965] systemd-fstab-generator[2621]: Ignoring "noauto" for root device
	[Jan 8 22:30] systemd-fstab-generator[7807]: Ignoring "noauto" for root device
	[  +0.740817] systemd-fstab-generator[7853]: Ignoring "noauto" for root device
	[  +0.252810] systemd-fstab-generator[7864]: Ignoring "noauto" for root device
	[  +0.274865] systemd-fstab-generator[7877]: Ignoring "noauto" for root device
	[  +5.438113] kauditd_printk_skb: 23 callbacks suppressed
	
	
	==> kernel <==
	 22:34:17 up 13 min,  0 users,  load average: 0.01, 0.19, 0.19
	Linux cert-expiration-785700 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 22:20:34 UTC, ends at Mon 2024-01-08 22:34:17 UTC. --
	Jan 08 22:34:10 cert-expiration-785700 kubelet[2654]: E0108 22:34:10.155724    2654 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"cert-expiration-785700\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/cert-expiration-785700?timeout=10s\": dial tcp 172.29.100.30:8443: connect: connection refused"
	Jan 08 22:34:10 cert-expiration-785700 kubelet[2654]: E0108 22:34:10.155967    2654 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"cert-expiration-785700\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/cert-expiration-785700?timeout=10s\": dial tcp 172.29.100.30:8443: connect: connection refused"
	Jan 08 22:34:10 cert-expiration-785700 kubelet[2654]: E0108 22:34:10.156150    2654 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"cert-expiration-785700\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/cert-expiration-785700?timeout=10s\": dial tcp 172.29.100.30:8443: connect: connection refused"
	Jan 08 22:34:10 cert-expiration-785700 kubelet[2654]: E0108 22:34:10.156473    2654 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"cert-expiration-785700\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/cert-expiration-785700?timeout=10s\": dial tcp 172.29.100.30:8443: connect: connection refused"
	Jan 08 22:34:10 cert-expiration-785700 kubelet[2654]: E0108 22:34:10.156576    2654 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Jan 08 22:34:11 cert-expiration-785700 kubelet[2654]: E0108 22:34:11.779807    2654 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/cert-expiration-785700?timeout=10s\": dial tcp 172.29.100.30:8443: connect: connection refused" interval="7s"
	Jan 08 22:34:12 cert-expiration-785700 kubelet[2654]: E0108 22:34:12.733850    2654 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m8.171715103s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jan 08 22:34:13 cert-expiration-785700 kubelet[2654]: E0108 22:34:13.460103    2654 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-cert-expiration-785700.17a87fd397f7b700", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-cert-expiration-785700", UID:"ecffcf1ac0eda13219ae0bb7e986b827", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"cert-expiration-785700"}, FirstTimestamp:time.Date(2024, time.January, 8, 22, 30, 5, 724153600, time.Local), LastTimestamp:time.Date(2024, time.January, 8, 22, 30, 10, 732610280, time.Local), Count:2, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"cert-expiration-785700"}': 'Patch "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-cert-expiration-785700.17a87fd397f7b700": dial tcp 172.29.100.30:8443: connect: connection refused'(may retry after sleeping)
	Jan 08 22:34:17 cert-expiration-785700 kubelet[2654]: E0108 22:34:17.573230    2654 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jan 08 22:34:17 cert-expiration-785700 kubelet[2654]: E0108 22:34:17.576082    2654 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 08 22:34:17 cert-expiration-785700 kubelet[2654]: E0108 22:34:17.577651    2654 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 08 22:34:17 cert-expiration-785700 kubelet[2654]: E0108 22:34:17.577288    2654 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 08 22:34:17 cert-expiration-785700 kubelet[2654]: E0108 22:34:17.578139    2654 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 08 22:34:17 cert-expiration-785700 kubelet[2654]: E0108 22:34:17.577378    2654 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jan 08 22:34:17 cert-expiration-785700 kubelet[2654]: E0108 22:34:17.578678    2654 kuberuntime_container.go:477] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 08 22:34:17 cert-expiration-785700 kubelet[2654]: E0108 22:34:17.577409    2654 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jan 08 22:34:17 cert-expiration-785700 kubelet[2654]: E0108 22:34:17.579276    2654 kuberuntime_image.go:103] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 08 22:34:17 cert-expiration-785700 kubelet[2654]: I0108 22:34:17.579509    2654 image_gc_manager.go:210] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 08 22:34:17 cert-expiration-785700 kubelet[2654]: E0108 22:34:17.580389    2654 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jan 08 22:34:17 cert-expiration-785700 kubelet[2654]: E0108 22:34:17.580595    2654 kuberuntime_container.go:477] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jan 08 22:34:17 cert-expiration-785700 kubelet[2654]: E0108 22:34:17.581202    2654 kubelet.go:1402] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Jan 08 22:34:17 cert-expiration-785700 kubelet[2654]: E0108 22:34:17.581442    2654 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jan 08 22:34:17 cert-expiration-785700 kubelet[2654]: E0108 22:34:17.581505    2654 container_log_manager.go:185] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 08 22:34:17 cert-expiration-785700 kubelet[2654]: E0108 22:34:17.581692    2654 kubelet.go:2865] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jan 08 22:34:17 cert-expiration-785700 kubelet[2654]: E0108 22:34:17.734644    2654 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m13.172509193s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 22:31:30.026031    9056 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0108 22:32:16.941630    9056 logs.go:281] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0108 22:32:16.982343    9056 logs.go:281] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0108 22:32:17.020619    9056 logs.go:281] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0108 22:32:17.054193    9056 logs.go:281] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0108 22:32:17.092515    9056 logs.go:281] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0108 22:32:17.135152    9056 logs.go:281] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0108 22:32:17.178715    9056 logs.go:281] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0108 22:33:17.316546    9056 logs.go:281] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0108 22:34:17.566318    9056 logs.go:195] command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-08T22:33:17Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unknown desc = failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	 output: "\n** stderr ** \ntime=\"2024-01-08T22:33:17Z\" level=fatal msg=\"validate service connection: validate CRI v1 runtime API for endpoint \\\"unix:///var/run/cri-dockerd.sock\\\": rpc error: code = Unknown desc = failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\"\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\n\n** /stderr **"
	E0108 22:34:17.675132    9056 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: container status, describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p cert-expiration-785700 -n cert-expiration-785700
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p cert-expiration-785700 -n cert-expiration-785700: exit status 2 (13.1157537s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 22:34:18.403308    3776 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "cert-expiration-785700" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-785700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-785700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-785700: (1m7.5087516s)
--- FAIL: TestCertExpiration (1202.38s)
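The failure above comes down to the guest's Docker daemon going away mid-test: every `docker ps` and the CRI version check end in "Cannot connect to the Docker daemon at unix:///var/run/docker.sock", and kubelet keeps reporting the runtime not ready until the PLEG 3m0s threshold trips. The following is a minimal Go sketch (diagnostic illustration only, not part of the test suite) of that same health check: it issues an Engine API /version request over the unix socket, roughly what the failing "failed to get docker version" path does. Run inside the guest (for example via `minikube ssh`), it distinguishes "socket refused/reset" from "daemon answering".

-- go sketch --
// probe_docker.go - illustration only: check whether the Docker daemon answers
// on /var/run/docker.sock, mirroring the version check that fails above.
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Route HTTP over the Docker unix socket instead of TCP.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, "unix", "/var/run/docker.sock")
			},
		},
	}
	// The host part of the URL is ignored once the dialer targets the socket.
	resp, err := client.Get("http://docker/version")
	if err != nil {
		fmt.Println("daemon not reachable:", err) // e.g. connection refused / reset by peer
		return
	}
	defer resp.Body.Close()
	fmt.Println("daemon answered:", resp.Status)
}
-- /go sketch --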

                                                
                                    
TestErrorSpam/setup (187.24s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-974700 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 --driver=hyperv
E0108 20:22:52.213262    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 20:22:52.228257    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 20:22:52.243944    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 20:22:52.274582    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 20:22:52.320687    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 20:22:52.413017    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 20:22:52.584273    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 20:22:52.910261    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 20:22:53.562649    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 20:22:54.854527    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 20:22:57.424301    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 20:23:02.556007    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 20:23:12.805536    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 20:23:33.291479    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 20:24:14.255165    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 20:25:36.178910    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-974700 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 --driver=hyperv: (3m7.2365765s)
error_spam_test.go:96: unexpected stderr: "W0108 20:22:29.956985    7528 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-974700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
- KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
- MINIKUBE_LOCATION=17907
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting control plane node nospam-974700 in cluster nospam-974700
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-974700" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0108 20:22:29.956985    7528 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (187.24s)
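The only stderr this test objects to is the Docker CLI context warning. The long hex directory in the missing path is not random: the Docker CLI stores context metadata under ~/.docker/contexts/meta/<sha256 of the context name>/meta.json, and 37a8eec1…f0688f should be the SHA-256 of the string "default". The tiny Go check below (illustration only) reproduces it.

-- go sketch --
// Illustration only: confirm that the directory name in the warning path is
// the SHA-256 of the context name "default".
package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	sum := sha256.Sum256([]byte("default"))
	// Expected output:
	// 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
	fmt.Printf("%x\n", sum)
}
-- /go sketch --

In other words, the Jenkins host's ~/.docker directory simply has no stored metadata for a context named "default"; the warning appears to be noise from the host's Docker CLI configuration rather than from the cluster under test.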

                                                
                                    
TestFunctional/parallel/ConfigCmd (2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-242800 config unset cpus" to be -""- but got *"W0108 20:37:43.298183   13728 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-242800 config get cpus: exit status 14 (299.1856ms)

                                                
                                                
** stderr ** 
	W0108 20:37:43.655874    5272 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-242800 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0108 20:37:43.655874    5272 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-242800 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0108 20:37:43.955851    1696 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-242800 config get cpus" to be -""- but got *"W0108 20:37:44.318068    9724 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-242800 config unset cpus" to be -""- but got *"W0108 20:37:44.674102   14016 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-242800 config get cpus: exit status 14 (312.9467ms)

                                                
                                                
** stderr ** 
	W0108 20:37:45.005766    1076 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-242800 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0108 20:37:45.005766    1076 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (2.00s)
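Each assertion here compares the command's stderr verbatim against an expected message, so the same Docker context warning prefixed to every invocation breaks all five checks even though the config output itself ("Error: specified key could not be found in config", the delete/start notice) is exactly what was expected. A hedged sketch of how such output could be normalized before comparison follows; stripContextWarning is a hypothetical helper, not the suite's actual code.

-- go sketch --
// stripwarn.go - illustration only: drop the Docker CLI context warning from
// captured stderr so the remaining text can be compared against expectations.
package main

import (
	"fmt"
	"strings"
)

func stripContextWarning(stderr string) string {
	var kept []string
	for _, line := range strings.Split(stderr, "\n") {
		if strings.Contains(line, "Unable to resolve the current Docker CLI context") {
			continue // host-side docker config noise, unrelated to minikube config handling
		}
		kept = append(kept, line)
	}
	return strings.TrimSpace(strings.Join(kept, "\n"))
}

func main() {
	got := "W0108 20:37:43.655874    5272 main.go:291] Unable to resolve the current Docker CLI context \"default\": ...\nError: specified key could not be found in config"
	fmt.Printf("%q\n", stripContextWarning(got)) // "Error: specified key could not be found in config"
}
-- /go sketch --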

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-242800 service --namespace=default --https --url hello-node: exit status 1 (15.0604973s)

                                                
                                                
** stderr ** 
	W0108 20:38:45.812966    9400 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1510: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-242800 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.06s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-242800 service hello-node --url --format={{.IP}}: exit status 1 (15.0484808s)

                                                
                                                
** stderr ** 
	W0108 20:39:00.899865    9704 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1541: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-242800 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1547: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.05s)
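The format subtest pipes the service URL through --format={{.IP}} and then validates the result as an IP address; because the command exited non-zero, the captured value is the empty string, which is what the `"" is not a valid IP` line reports. A minimal Go sketch of that kind of validation (illustration only, with made-up sample values; it makes no assumption about the suite's exact helper):

-- go sketch --
// Illustration only: the shape of an "is it a valid IP" check that rejects the
// empty string captured from the failed command above.
package main

import (
	"fmt"
	"net"
)

func main() {
	for _, candidate := range []string{"", "192.0.2.1"} { // sample values, not from this run
		if net.ParseIP(candidate) == nil {
			fmt.Printf("%q is not a valid IP\n", candidate)
			continue
		}
		fmt.Printf("%q is a valid IP\n", candidate)
	}
}
-- /go sketch --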

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-242800 service hello-node --url: exit status 1 (15.05681s)

                                                
                                                
** stderr ** 
	W0108 20:39:15.911739    2076 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1560: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-242800 service hello-node --url": exit status 1
functional_test.go:1564: found endpoint for hello-node: 
functional_test.go:1572: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.06s)
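Same root cause as the two preceding subtests: `minikube service hello-node --url` exits 1 before printing an endpoint, so the captured URL is empty and the scheme check fails. A small sketch of the check being made (illustration only; the non-empty endpoint value is hypothetical):

-- go sketch --
// Illustration only: parse the endpoint returned by `minikube service --url`
// and verify the scheme, as the assertion above expects "http".
package main

import (
	"fmt"
	"net/url"
)

func main() {
	for _, endpoint := range []string{"", "http://192.0.2.10:30080"} { // second value is hypothetical
		u, err := url.Parse(endpoint)
		if err != nil {
			fmt.Printf("endpoint %q: parse error: %v\n", endpoint, err)
			continue
		}
		if u.Scheme != "http" {
			fmt.Printf("endpoint %q: expected scheme \"http\", got %q\n", endpoint, u.Scheme)
			continue
		}
		fmt.Printf("endpoint %q: ok (host %s)\n", endpoint, u.Host)
	}
}
-- /go sketch --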

                                                
                                    
TestMinikubeProfile (544.45s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-818600 --driver=hyperv
E0108 21:02:58.287768    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
E0108 21:03:31.987382    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
E0108 21:04:21.461090    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
E0108 21:05:47.999520    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-818600 --driver=hyperv: (3m8.8729414s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-818600 --driver=hyperv
E0108 21:06:15.834061    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
E0108 21:07:52.235761    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 21:07:58.283139    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p second-818600 --driver=hyperv: exit status 90 (3m27.1184346s)

                                                
                                                
-- stdout --
	* [second-818600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node second-818600 in cluster second-818600
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 21:06:03.624888    4132 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2024-01-08 21:07:06 UTC, ends at Mon 2024-01-08 21:09:30 UTC. --
	Jan 08 21:07:58 second-818600 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.490402031Z" level=info msg="Starting up"
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.491257970Z" level=info msg="containerd not running, starting managed containerd"
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.492956349Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=685
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.529766661Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.553371058Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.553404559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.555491156Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.555583461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.555833372Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.555870974Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.555972679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.556086784Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.556118886Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.556241991Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.556796617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.556879721Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.556895722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.557051629Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.557180835Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.557322342Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.557409146Z" level=info msg="metadata content store policy set" policy=shared
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.574335432Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.574810955Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.574863757Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.574944461Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.575033865Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.575093668Z" level=info msg="NRI interface is disabled by configuration."
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.575158271Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.575432883Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.575525588Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.575621792Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.575964008Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.575999010Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.576096814Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.576236421Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.576263922Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.576280023Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.576295224Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.576318425Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.576334925Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.576553136Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.577969701Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578126409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578167011Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578192412Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578273416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578371120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578391121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578404422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578417722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578431623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578445423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578822741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578877844Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578981648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.579013550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.579032051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.579049552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.579071653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.579146756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.579247661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.579272062Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.579383567Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.579473571Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.579499973Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.580484118Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.580632325Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.580888837Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.581474964Z" level=info msg="containerd successfully booted in 0.054769s"
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.621001202Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.633865300Z" level=info msg="Loading containers: start."
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.844155976Z" level=info msg="Loading containers: done."
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.860820350Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.860924255Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.861137065Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.861322774Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.861433479Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.861809996Z" level=info msg="Daemon has completed initialization"
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.923348757Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 08 21:07:58 second-818600 systemd[1]: Started Docker Application Container Engine.
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.923812679Z" level=info msg="API listen on [::]:2376"
	Jan 08 21:08:29 second-818600 dockerd[679]: time="2024-01-08T21:08:29.450296956Z" level=info msg="Processing signal 'terminated'"
	Jan 08 21:08:29 second-818600 dockerd[679]: time="2024-01-08T21:08:29.452074556Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 08 21:08:29 second-818600 dockerd[679]: time="2024-01-08T21:08:29.452111256Z" level=info msg="Daemon shutdown complete"
	Jan 08 21:08:29 second-818600 dockerd[679]: time="2024-01-08T21:08:29.452143856Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 08 21:08:29 second-818600 dockerd[679]: time="2024-01-08T21:08:29.452182356Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 08 21:08:29 second-818600 systemd[1]: Stopping Docker Application Container Engine...
	Jan 08 21:08:30 second-818600 systemd[1]: docker.service: Succeeded.
	Jan 08 21:08:30 second-818600 systemd[1]: Stopped Docker Application Container Engine.
	Jan 08 21:08:30 second-818600 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 21:08:30 second-818600 dockerd[1015]: time="2024-01-08T21:08:30.527998356Z" level=info msg="Starting up"
	Jan 08 21:09:30 second-818600 dockerd[1015]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 08 21:09:30 second-818600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 08 21:09:30 second-818600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 08 21:09:30 second-818600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-windows-amd64.exe start -p second-818600 --driver=hyperv": exit status 90
panic.go:523: *** TestMinikubeProfile FAILED at 2024-01-08 21:09:30.6303578 +0000 UTC m=+3532.520065301
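The journal above shows why second-818600 never came up: after minikube restarts docker, the new dockerd gives up after a minute because it cannot dial /run/containerd/containerd.sock ("context deadline exceeded"), so the unit fails and the start exits with RUNTIME_ENABLE. A minimal Go sketch of that same dial (diagnostic illustration only, run inside the guest), which separates "nothing listening on the socket" from "containerd accepting connections":

-- go sketch --
// Illustration only: attempt the containerd socket dial that dockerd times out
// on in the journal above ("failed to dial /run/containerd/containerd.sock").
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 3*time.Second)
	if err != nil {
		fmt.Println("containerd socket not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("containerd socket accepted the connection")
}
-- /go sketch --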
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p second-818600 -n second-818600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p second-818600 -n second-818600: exit status 6 (12.0740627s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 21:09:30.762357    6716 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0108 21:09:42.632325    6716 status.go:415] kubeconfig endpoint: extract IP: "second-818600" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "second-818600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "second-818600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-818600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-818600: (1m1.7544284s)
panic.go:523: *** TestMinikubeProfile FAILED at 2024-01-08 21:10:44.4603061 +0000 UTC m=+3606.349644201
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p first-818600 -n first-818600
E0108 21:10:47.997230    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p first-818600 -n first-818600: (12.0390288s)
helpers_test.go:244: <<< TestMinikubeProfile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMinikubeProfile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p first-818600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p first-818600 logs -n 25: (8.0644397s)
helpers_test.go:252: TestMinikubeProfile logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------|-----------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                   Args                   |           Profile           |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------|-----------------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p functional-242800                     | functional-242800           | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:45 UTC | 08 Jan 24 20:46 UTC |
	| start   | -p image-192800                          | image-192800                | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:46 UTC | 08 Jan 24 20:49 UTC |
	|         | --driver=hyperv                          |                             |                   |         |                     |                     |
	| image   | build -t aaa:latest                      | image-192800                | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:49 UTC | 08 Jan 24 20:49 UTC |
	|         | ./testdata/image-build/test-normal       |                             |                   |         |                     |                     |
	|         | -p image-192800                          |                             |                   |         |                     |                     |
	| image   | build -t aaa:latest                      | image-192800                | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:49 UTC | 08 Jan 24 20:49 UTC |
	|         | --build-opt=build-arg=ENV_A=test_env_str |                             |                   |         |                     |                     |
	|         | --build-opt=no-cache                     |                             |                   |         |                     |                     |
	|         | ./testdata/image-build/test-arg -p       |                             |                   |         |                     |                     |
	|         | image-192800                             |                             |                   |         |                     |                     |
	| image   | build -t aaa:latest                      | image-192800                | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:49 UTC | 08 Jan 24 20:50 UTC |
	|         | ./testdata/image-build/test-normal       |                             |                   |         |                     |                     |
	|         | --build-opt=no-cache -p                  |                             |                   |         |                     |                     |
	|         | image-192800                             |                             |                   |         |                     |                     |
	| image   | build -t aaa:latest                      | image-192800                | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:50 UTC | 08 Jan 24 20:50 UTC |
	|         | -f inner/Dockerfile                      |                             |                   |         |                     |                     |
	|         | ./testdata/image-build/test-f            |                             |                   |         |                     |                     |
	|         | -p image-192800                          |                             |                   |         |                     |                     |
	| delete  | -p image-192800                          | image-192800                | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:50 UTC | 08 Jan 24 20:50 UTC |
	| start   | -p ingress-addon-legacy-054400           | ingress-addon-legacy-054400 | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:50 UTC | 08 Jan 24 20:54 UTC |
	|         | --kubernetes-version=v1.18.20            |                             |                   |         |                     |                     |
	|         | --memory=4096 --wait=true                |                             |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                   |                             |                   |         |                     |                     |
	|         | --driver=hyperv                          |                             |                   |         |                     |                     |
	| addons  | ingress-addon-legacy-054400              | ingress-addon-legacy-054400 | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:54 UTC | 08 Jan 24 20:55 UTC |
	|         | addons enable ingress                    |                             |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                   |                             |                   |         |                     |                     |
	| addons  | ingress-addon-legacy-054400              | ingress-addon-legacy-054400 | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:55 UTC | 08 Jan 24 20:55 UTC |
	|         | addons enable ingress-dns                |                             |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                   |                             |                   |         |                     |                     |
	| ssh     | ingress-addon-legacy-054400              | ingress-addon-legacy-054400 | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:56 UTC | 08 Jan 24 20:56 UTC |
	|         | ssh curl -s http://127.0.0.1/            |                             |                   |         |                     |                     |
	|         | -H 'Host: nginx.example.com'             |                             |                   |         |                     |                     |
	| ip      | ingress-addon-legacy-054400 ip           | ingress-addon-legacy-054400 | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:56 UTC | 08 Jan 24 20:56 UTC |
	| addons  | ingress-addon-legacy-054400              | ingress-addon-legacy-054400 | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:56 UTC | 08 Jan 24 20:56 UTC |
	|         | addons disable ingress-dns               |                             |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                   |                             |                   |         |                     |                     |
	| addons  | ingress-addon-legacy-054400              | ingress-addon-legacy-054400 | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:56 UTC | 08 Jan 24 20:57 UTC |
	|         | addons disable ingress                   |                             |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                   |                             |                   |         |                     |                     |
	| delete  | -p ingress-addon-legacy-054400           | ingress-addon-legacy-054400 | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:57 UTC | 08 Jan 24 20:58 UTC |
	| start   | -p json-output-708000                    | json-output-708000          | testUser          | v1.32.0 | 08 Jan 24 20:58 UTC | 08 Jan 24 21:01 UTC |
	|         | --output=json --user=testUser            |                             |                   |         |                     |                     |
	|         | --memory=2200 --wait=true                |                             |                   |         |                     |                     |
	|         | --driver=hyperv                          |                             |                   |         |                     |                     |
	| pause   | -p json-output-708000                    | json-output-708000          | testUser          | v1.32.0 | 08 Jan 24 21:01 UTC | 08 Jan 24 21:01 UTC |
	|         | --output=json --user=testUser            |                             |                   |         |                     |                     |
	| unpause | -p json-output-708000                    | json-output-708000          | testUser          | v1.32.0 | 08 Jan 24 21:01 UTC | 08 Jan 24 21:02 UTC |
	|         | --output=json --user=testUser            |                             |                   |         |                     |                     |
	| stop    | -p json-output-708000                    | json-output-708000          | testUser          | v1.32.0 | 08 Jan 24 21:02 UTC | 08 Jan 24 21:02 UTC |
	|         | --output=json --user=testUser            |                             |                   |         |                     |                     |
	| delete  | -p json-output-708000                    | json-output-708000          | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:02 UTC | 08 Jan 24 21:02 UTC |
	| start   | -p json-output-error-765100              | json-output-error-765100    | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:02 UTC |                     |
	|         | --memory=2200 --output=json              |                             |                   |         |                     |                     |
	|         | --wait=true --driver=fail                |                             |                   |         |                     |                     |
	| delete  | -p json-output-error-765100              | json-output-error-765100    | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:02 UTC | 08 Jan 24 21:02 UTC |
	| start   | -p first-818600                          | first-818600                | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:02 UTC | 08 Jan 24 21:06 UTC |
	|         | --driver=hyperv                          |                             |                   |         |                     |                     |
	| start   | -p second-818600                         | second-818600               | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:06 UTC |                     |
	|         | --driver=hyperv                          |                             |                   |         |                     |                     |
	| delete  | -p second-818600                         | second-818600               | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:09 UTC | 08 Jan 24 21:10 UTC |
	|---------|------------------------------------------|-----------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:06:03
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:06:03.698498    4132 out.go:296] Setting OutFile to fd 1080 ...
	I0108 21:06:03.699502    4132 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:06:03.699502    4132 out.go:309] Setting ErrFile to fd 1432...
	I0108 21:06:03.699502    4132 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:06:03.721537    4132 out.go:303] Setting JSON to false
	I0108 21:06:03.724644    4132 start.go:128] hostinfo: {"hostname":"minikube7","uptime":25905,"bootTime":1704722057,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0108 21:06:03.724721    4132 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 21:06:03.725438    4132 out.go:177] * [second-818600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I0108 21:06:03.726729    4132 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 21:06:03.727391    4132 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:06:03.726183    4132 notify.go:220] Checking for updates...
	I0108 21:06:03.727564    4132 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0108 21:06:03.728677    4132 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 21:06:03.730029    4132 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:06:03.733405    4132 config.go:182] Loaded profile config "first-818600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:06:03.733987    4132 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:06:09.110698    4132 out.go:177] * Using the hyperv driver based on user configuration
	I0108 21:06:09.111892    4132 start.go:298] selected driver: hyperv
	I0108 21:06:09.111892    4132 start.go:902] validating driver "hyperv" against <nil>
	I0108 21:06:09.112016    4132 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:06:09.112369    4132 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 21:06:09.160695    4132 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0108 21:06:09.161831    4132 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 21:06:09.161831    4132 cni.go:84] Creating CNI manager for ""
	I0108 21:06:09.161831    4132 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 21:06:09.161831    4132 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 21:06:09.161831    4132 start_flags.go:323] config:
	{Name:second-818600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:second-818600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:06:09.161831    4132 iso.go:125] acquiring lock: {Name:mk1869c5b5033c1301af33a7fa364ec43dd63efe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:06:09.163919    4132 out.go:177] * Starting control plane node second-818600 in cluster second-818600
	I0108 21:06:09.164676    4132 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 21:06:09.164676    4132 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0108 21:06:09.164676    4132 cache.go:56] Caching tarball of preloaded images
	I0108 21:06:09.164676    4132 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:06:09.164676    4132 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0108 21:06:09.164676    4132 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\second-818600\config.json ...
	I0108 21:06:09.165543    4132 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\second-818600\config.json: {Name:mk4bab50386f05966855aaa54cf0c8ffc2f00ff0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:06:09.166543    4132 start.go:365] acquiring machines lock for second-818600: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:06:09.166543    4132 start.go:369] acquired machines lock for "second-818600" in 0s
	I0108 21:06:09.166543    4132 start.go:93] Provisioning new machine with config: &{Name:second-818600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:second-818600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 21:06:09.166543    4132 start.go:125] createHost starting for "" (driver="hyperv")
	I0108 21:06:09.167567    4132 out.go:204] * Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I0108 21:06:09.167567    4132 start.go:159] libmachine.API.Create for "second-818600" (driver="hyperv")
	I0108 21:06:09.167567    4132 client.go:168] LocalClient.Create starting
	I0108 21:06:09.168544    4132 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0108 21:06:09.168544    4132 main.go:141] libmachine: Decoding PEM data...
	I0108 21:06:09.168544    4132 main.go:141] libmachine: Parsing certificate...
	I0108 21:06:09.168544    4132 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0108 21:06:09.168544    4132 main.go:141] libmachine: Decoding PEM data...
	I0108 21:06:09.168544    4132 main.go:141] libmachine: Parsing certificate...
	I0108 21:06:09.168544    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0108 21:06:11.248192    4132 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0108 21:06:11.248299    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:06:11.248352    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0108 21:06:12.985408    4132 main.go:141] libmachine: [stdout =====>] : False
	
	I0108 21:06:12.985408    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:06:12.985408    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0108 21:06:14.497112    4132 main.go:141] libmachine: [stdout =====>] : True
	
	I0108 21:06:14.497404    4132 main.go:141] libmachine: [stderr =====>] : 
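Before touching anything, the hyperv driver verifies that the Hyper-V PowerShell module is available and that the caller is either a member of the Hyper-V Administrators group (SID S-1-5-32-578, False above) or a local Administrator (True above). A minimal sketch to run the same checks by hand in an ordinary PowerShell session (module name and SID taken from the commands above):

    @(Get-Module -ListAvailable Hyper-V).Name | Get-Unique
    $principal = [Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()
    $principal.IsInRole([System.Security.Principal.SecurityIdentifier]::new('S-1-5-32-578'))  # Hyper-V Administrators
    $principal.IsInRole([Security.Principal.WindowsBuiltInRole] 'Administrator')              # local Administrator

Only the local-Administrator check passes on this host, which is what allows the driver to proceed.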
	I0108 21:06:14.497486    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0108 21:06:18.076826    4132 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0108 21:06:18.076826    4132 main.go:141] libmachine: [stderr =====>] : 
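The query above is how libmachine picks a virtual switch: it lists Get-VMSwitch and keeps only external switches or the built-in Default Switch (ID c08cb7b8-9b3c-408e-8e30-5e16a3aeb444). A hand-runnable sketch of the same filter, assuming the Hyper-V module confirmed earlier:

    Get-VMSwitch | Select-Object Id, Name, SwitchType |
        Where-Object { ($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444') } |
        Sort-Object -Property SwitchType | ConvertTo-Json

On this host only the internal Default Switch matches (SwitchType 1), which is why the driver later settles on "Default Switch".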
	I0108 21:06:18.080315    4132 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0108 21:06:18.519690    4132 main.go:141] libmachine: Creating SSH key...
	I0108 21:06:18.612681    4132 main.go:141] libmachine: Creating VM...
	I0108 21:06:18.612841    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0108 21:06:21.492873    4132 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0108 21:06:21.492873    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:06:21.492873    4132 main.go:141] libmachine: Using switch "Default Switch"
	I0108 21:06:21.492873    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0108 21:06:23.300757    4132 main.go:141] libmachine: [stdout =====>] : True
	
	I0108 21:06:23.300757    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:06:23.300757    4132 main.go:141] libmachine: Creating VHD
	I0108 21:06:23.300757    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-818600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0108 21:06:27.096121    4132 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-818600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : A7C122F6-DED2-4442-8636-8E91197D39AE
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0108 21:06:27.096121    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:06:27.096212    4132 main.go:141] libmachine: Writing magic tar header
	I0108 21:06:27.096212    4132 main.go:141] libmachine: Writing SSH key tar header
	I0108 21:06:27.104782    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-818600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-818600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0108 21:06:30.292629    4132 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:06:30.292689    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:06:30.292689    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-818600\disk.vhd' -SizeBytes 20000MB
	I0108 21:06:32.789603    4132 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:06:32.789798    4132 main.go:141] libmachine: [stderr =====>] : 
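The three cmdlets above build the machine disk in stages: a small fixed-size VHD is created so the boot payload (the magic tar header and SSH key written just above) can be placed into it directly, then the file is converted to a dynamic VHD named disk.vhd and grown to the requested 20000 MB. A condensed sketch of the same sequence, reusing the paths from the log and assuming an elevated session:

    $dir = 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-818600'
    New-VHD     -Path "$dir\fixed.vhd" -SizeBytes 10MB -Fixed
    # minikube writes the tar header and SSH key into fixed.vhd at this point
    Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
    Resize-VHD  -Path "$dir\disk.vhd" -SizeBytes 20000MB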
	I0108 21:06:32.789798    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM second-818600 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-818600' -SwitchName 'Default Switch' -MemoryStartupBytes 6000MB
	I0108 21:06:36.326529    4132 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	second-818600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0108 21:06:36.326529    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:06:36.326529    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName second-818600 -DynamicMemoryEnabled $false
	I0108 21:06:38.540857    4132 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:06:38.540857    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:06:38.540857    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor second-818600 -Count 2
	I0108 21:06:40.689462    4132 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:06:40.689462    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:06:40.689462    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName second-818600 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-818600\boot2docker.iso'
	I0108 21:06:43.261785    4132 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:06:43.261785    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:06:43.262079    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName second-818600 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-818600\disk.vhd'
	I0108 21:06:45.850209    4132 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:06:45.850209    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:06:45.850209    4132 main.go:141] libmachine: Starting VM...
	I0108 21:06:45.850209    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM second-818600
	I0108 21:06:48.762483    4132 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:06:48.762483    4132 main.go:141] libmachine: [stderr =====>] : 
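The VM itself is then assembled one cmdlet at a time: New-VM on the chosen switch with 6000 MB startup memory, dynamic memory turned off, two vCPUs, the boot2docker ISO attached as a DVD drive, the converted disk attached, and finally Start-VM. The same steps as one hand-runnable sketch (names and paths copied from the log, elevated session assumed):

    $vm  = 'second-818600'
    $dir = 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-818600'
    New-VM $vm -Path $dir -SwitchName 'Default Switch' -MemoryStartupBytes 6000MB
    Set-VMMemory        -VMName $vm -DynamicMemoryEnabled $false
    Set-VMProcessor     $vm -Count 2
    Set-VMDvdDrive      -VMName $vm -Path "$dir\boot2docker.iso"
    Add-VMHardDiskDrive -VMName $vm -Path "$dir\disk.vhd"
    Start-VM $vm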
	I0108 21:06:48.762483    4132 main.go:141] libmachine: Waiting for host to start...
	I0108 21:06:48.762483    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-818600 ).state
	I0108 21:06:51.068962    4132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:06:51.069026    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:06:51.069204    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-818600 ).networkadapters[0]).ipaddresses[0]
	I0108 21:06:53.672467    4132 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:06:53.672676    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:06:54.674697    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-818600 ).state
	I0108 21:06:56.929371    4132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:06:56.929608    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:06:56.929806    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-818600 ).networkadapters[0]).ipaddresses[0]
	I0108 21:06:59.476218    4132 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:06:59.476218    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:07:00.480766    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-818600 ).state
	I0108 21:07:02.690788    4132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:07:02.690788    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:07:02.691090    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-818600 ).networkadapters[0]).ipaddresses[0]
	I0108 21:07:05.298737    4132 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:07:05.298737    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:07:06.302212    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-818600 ).state
	I0108 21:07:08.503142    4132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:07:08.503142    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:07:08.503418    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-818600 ).networkadapters[0]).ipaddresses[0]
	I0108 21:07:11.098099    4132 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:07:11.098099    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:07:12.100931    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-818600 ).state
	I0108 21:07:14.279580    4132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:07:14.279657    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:07:14.280407    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-818600 ).networkadapters[0]).ipaddresses[0]
	I0108 21:07:16.883175    4132 main.go:141] libmachine: [stdout =====>] : 172.29.96.48
	
	I0108 21:07:16.883439    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:07:16.883439    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-818600 ).state
	I0108 21:07:19.066055    4132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:07:19.066055    4132 main.go:141] libmachine: [stderr =====>] : 
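The repeated state/ipaddresses queries above are the driver's wait loop: the VM reports Running almost immediately after Start-VM, but the first address on the Default Switch only appears about half a minute later (172.29.96.48 at 21:07:16). An equivalent polling sketch, assuming the same VM name:

    $vm = 'second-818600'
    do {
        Start-Sleep -Seconds 1
        $state = (Get-VM $vm).State
        $ip    = ((Get-VM $vm).NetworkAdapters[0]).IPAddresses[0]
    } until ($state -eq 'Running' -and $ip)
    "$vm is $state at $ip"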
	I0108 21:07:19.066055    4132 machine.go:88] provisioning docker machine ...
	I0108 21:07:19.066658    4132 buildroot.go:166] provisioning hostname "second-818600"
	I0108 21:07:19.066658    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-818600 ).state
	I0108 21:07:21.249838    4132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:07:21.249838    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:07:21.250125    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-818600 ).networkadapters[0]).ipaddresses[0]
	I0108 21:07:23.826693    4132 main.go:141] libmachine: [stdout =====>] : 172.29.96.48
	
	I0108 21:07:23.826693    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:07:23.834945    4132 main.go:141] libmachine: Using SSH client type: native
	I0108 21:07:23.844348    4132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.96.48 22 <nil> <nil>}
	I0108 21:07:23.844348    4132 main.go:141] libmachine: About to run SSH command:
	sudo hostname second-818600 && echo "second-818600" | sudo tee /etc/hostname
	I0108 21:07:24.008184    4132 main.go:141] libmachine: SSH cmd err, output: <nil>: second-818600
	
	I0108 21:07:24.008184    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-818600 ).state
	I0108 21:07:26.115767    4132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:07:26.115972    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:07:26.115972    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-818600 ).networkadapters[0]).ipaddresses[0]
	I0108 21:07:28.644344    4132 main.go:141] libmachine: [stdout =====>] : 172.29.96.48
	
	I0108 21:07:28.644344    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:07:28.649995    4132 main.go:141] libmachine: Using SSH client type: native
	I0108 21:07:28.650703    4132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.96.48 22 <nil> <nil>}
	I0108 21:07:28.650703    4132 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\ssecond-818600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 second-818600/g' /etc/hosts;
				else 
					echo '127.0.1.1 second-818600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:07:28.803682    4132 main.go:141] libmachine: SSH cmd err, output: <nil>: 
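With an address in hand, provisioning moves inside the guest over SSH: the hostname is set to second-818600 and the script above patches /etc/hosts so the name resolves via 127.0.1.1 (rewriting an existing 127.0.1.1 entry if present, otherwise appending one). A manual spot-check of the result, using the key path and docker user that sshutil.go reports elsewhere in this log, might look like this (not something the test itself runs):

    ssh -o StrictHostKeyChecking=no `
        -i 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-818600\id_rsa' `
        docker@172.29.96.48 'hostname; grep second-818600 /etc/hosts'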
	I0108 21:07:28.803755    4132 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0108 21:07:28.803800    4132 buildroot.go:174] setting up certificates
	I0108 21:07:28.803841    4132 provision.go:83] configureAuth start
	I0108 21:07:28.803841    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-818600 ).state
	I0108 21:07:30.934492    4132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:07:30.934492    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:07:30.934756    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-818600 ).networkadapters[0]).ipaddresses[0]
	I0108 21:07:33.500917    4132 main.go:141] libmachine: [stdout =====>] : 172.29.96.48
	
	I0108 21:07:33.500917    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:07:33.501002    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-818600 ).state
	I0108 21:07:35.618491    4132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:07:35.618491    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:07:35.618576    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-818600 ).networkadapters[0]).ipaddresses[0]
	I0108 21:07:38.149353    4132 main.go:141] libmachine: [stdout =====>] : 172.29.96.48
	
	I0108 21:07:38.149353    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:07:38.149353    4132 provision.go:138] copyHostCerts
	I0108 21:07:38.149862    4132 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0108 21:07:38.149929    4132 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0108 21:07:38.150490    4132 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0108 21:07:38.152021    4132 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0108 21:07:38.152021    4132 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0108 21:07:38.152375    4132 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0108 21:07:38.153521    4132 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0108 21:07:38.153521    4132 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0108 21:07:38.153699    4132 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0108 21:07:38.154395    4132 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.second-818600 san=[172.29.96.48 172.29.96.48 localhost 127.0.0.1 minikube second-818600]
	I0108 21:07:38.642875    4132 provision.go:172] copyRemoteCerts
	I0108 21:07:38.657481    4132 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:07:38.657559    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-818600 ).state
	I0108 21:07:40.794775    4132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:07:40.794775    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:07:40.794775    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-818600 ).networkadapters[0]).ipaddresses[0]
	I0108 21:07:43.297332    4132 main.go:141] libmachine: [stdout =====>] : 172.29.96.48
	
	I0108 21:07:43.297570    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:07:43.298178    4132 sshutil.go:53] new ssh client: &{IP:172.29.96.48 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-818600\id_rsa Username:docker}
	I0108 21:07:43.406225    4132 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7486347s)
	I0108 21:07:43.406225    4132 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:07:43.444837    4132 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0108 21:07:43.482855    4132 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:07:43.521121    4132 provision.go:86] duration metric: configureAuth took 14.7172044s
	I0108 21:07:43.521261    4132 buildroot.go:189] setting minikube options for container-runtime
	I0108 21:07:43.521784    4132 config.go:182] Loaded profile config "second-818600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:07:43.521784    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-818600 ).state
	I0108 21:07:45.671692    4132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:07:45.671692    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:07:45.671893    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-818600 ).networkadapters[0]).ipaddresses[0]
	I0108 21:07:48.257739    4132 main.go:141] libmachine: [stdout =====>] : 172.29.96.48
	
	I0108 21:07:48.257739    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:07:48.263387    4132 main.go:141] libmachine: Using SSH client type: native
	I0108 21:07:48.263964    4132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.96.48 22 <nil> <nil>}
	I0108 21:07:48.263964    4132 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 21:07:48.405739    4132 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0108 21:07:48.405739    4132 buildroot.go:70] root file system type: tmpfs
	I0108 21:07:48.405739    4132 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 21:07:48.405739    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-818600 ).state
	I0108 21:07:50.536106    4132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:07:50.536345    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:07:50.536345    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-818600 ).networkadapters[0]).ipaddresses[0]
	I0108 21:07:53.088490    4132 main.go:141] libmachine: [stdout =====>] : 172.29.96.48
	
	I0108 21:07:53.088490    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:07:53.093722    4132 main.go:141] libmachine: Using SSH client type: native
	I0108 21:07:53.094465    4132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.96.48 22 <nil> <nil>}
	I0108 21:07:53.094465    4132 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 21:07:53.258583    4132 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 21:07:53.259121    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-818600 ).state
	I0108 21:07:55.423432    4132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:07:55.423529    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:07:55.423587    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-818600 ).networkadapters[0]).ipaddresses[0]
	I0108 21:07:57.952518    4132 main.go:141] libmachine: [stdout =====>] : 172.29.96.48
	
	I0108 21:07:57.952518    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:07:57.957954    4132 main.go:141] libmachine: Using SSH client type: native
	I0108 21:07:57.958726    4132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.96.48 22 <nil> <nil>}
	I0108 21:07:57.958811    4132 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 21:07:58.919373    4132 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0108 21:07:58.919373    4132 machine.go:91] provisioned docker machine in 39.8531127s
	I0108 21:07:58.919373    4132 client.go:171] LocalClient.Create took 1m49.7512374s
	I0108 21:07:58.919373    4132 start.go:167] duration metric: libmachine.API.Create for "second-818600" took 1m49.7512374s
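There was no pre-existing /lib/systemd/system/docker.service in the guest image (hence the diff error above), so the generated unit is simply moved into place, enabled, and symlinked, and docker is started for the first time; the restart that ultimately fails only happens later, at 21:08:29, after the cgroup-driver reconfiguration. When that failure is investigated, the same SSH channel can pull the unit and the daemon logs that the error message points at; a hedged sketch, reusing the key, user and IP from this log:

    ssh -i 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-818600\id_rsa' `
        docker@172.29.96.48 'sudo systemctl cat docker.service; sudo systemctl status docker.service --no-pager; sudo journalctl -u docker --no-pager | tail -n 100'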
	I0108 21:07:58.919373    4132 start.go:300] post-start starting for "second-818600" (driver="hyperv")
	I0108 21:07:58.919373    4132 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:07:58.935026    4132 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:07:58.935026    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-818600 ).state
	I0108 21:08:01.074720    4132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:08:01.074929    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:08:01.074929    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-818600 ).networkadapters[0]).ipaddresses[0]
	I0108 21:08:03.618001    4132 main.go:141] libmachine: [stdout =====>] : 172.29.96.48
	
	I0108 21:08:03.618217    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:08:03.618636    4132 sshutil.go:53] new ssh client: &{IP:172.29.96.48 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-818600\id_rsa Username:docker}
	I0108 21:08:03.725118    4132 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7900676s)
	I0108 21:08:03.739301    4132 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:08:03.745463    4132 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 21:08:03.745463    4132 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0108 21:08:03.746133    4132 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0108 21:08:03.747777    4132 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem -> 30082.pem in /etc/ssl/certs
	I0108 21:08:03.763899    4132 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:08:03.778142    4132 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem --> /etc/ssl/certs/30082.pem (1708 bytes)
	I0108 21:08:03.818681    4132 start.go:303] post-start completed in 4.8992826s
	I0108 21:08:03.821948    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-818600 ).state
	I0108 21:08:05.986065    4132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:08:05.986235    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:08:05.986235    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-818600 ).networkadapters[0]).ipaddresses[0]
	I0108 21:08:08.519451    4132 main.go:141] libmachine: [stdout =====>] : 172.29.96.48
	
	I0108 21:08:08.519451    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:08:08.519619    4132 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\second-818600\config.json ...
	I0108 21:08:08.522554    4132 start.go:128] duration metric: createHost completed in 1m59.355393s
	I0108 21:08:08.522652    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-818600 ).state
	I0108 21:08:10.654367    4132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:08:10.654367    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:08:10.654367    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-818600 ).networkadapters[0]).ipaddresses[0]
	I0108 21:08:13.185919    4132 main.go:141] libmachine: [stdout =====>] : 172.29.96.48
	
	I0108 21:08:13.185919    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:08:13.191213    4132 main.go:141] libmachine: Using SSH client type: native
	I0108 21:08:13.191848    4132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.96.48 22 <nil> <nil>}
	I0108 21:08:13.191848    4132 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 21:08:13.330977    4132 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704748093.337199268
	
	I0108 21:08:13.330977    4132 fix.go:206] guest clock: 1704748093.337199268
	I0108 21:08:13.330977    4132 fix.go:219] Guest: 2024-01-08 21:08:13.337199268 +0000 UTC Remote: 2024-01-08 21:08:08.5225546 +0000 UTC m=+124.992046701 (delta=4.814644668s)
	I0108 21:08:13.330977    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-818600 ).state
	I0108 21:08:15.430028    4132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:08:15.430028    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:08:15.430117    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-818600 ).networkadapters[0]).ipaddresses[0]
	I0108 21:08:17.960398    4132 main.go:141] libmachine: [stdout =====>] : 172.29.96.48
	
	I0108 21:08:17.960398    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:08:17.966216    4132 main.go:141] libmachine: Using SSH client type: native
	I0108 21:08:17.966699    4132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.96.48 22 <nil> <nil>}
	I0108 21:08:17.966699    4132 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1704748093
	I0108 21:08:18.115787    4132 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan  8 21:08:13 UTC 2024
	
	I0108 21:08:18.115787    4132 fix.go:226] clock set: Mon Jan  8 21:08:13 UTC 2024
	 (err=<nil>)
	I0108 21:08:18.115787    4132 start.go:83] releasing machines lock for "second-818600", held for 2m8.9485773s
	I0108 21:08:18.116353    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-818600 ).state
	I0108 21:08:20.236579    4132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:08:20.236579    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:08:20.236579    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-818600 ).networkadapters[0]).ipaddresses[0]
	I0108 21:08:22.767968    4132 main.go:141] libmachine: [stdout =====>] : 172.29.96.48
	
	I0108 21:08:22.767968    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:08:22.772783    4132 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:08:22.772879    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-818600 ).state
	I0108 21:08:22.784525    4132 ssh_runner.go:195] Run: cat /version.json
	I0108 21:08:22.784525    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-818600 ).state
	I0108 21:08:24.952254    4132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:08:24.952254    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:08:24.952313    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-818600 ).networkadapters[0]).ipaddresses[0]
	I0108 21:08:25.030414    4132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:08:25.030414    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:08:25.030414    4132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-818600 ).networkadapters[0]).ipaddresses[0]
	I0108 21:08:27.612953    4132 main.go:141] libmachine: [stdout =====>] : 172.29.96.48
	
	I0108 21:08:27.612953    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:08:27.613588    4132 sshutil.go:53] new ssh client: &{IP:172.29.96.48 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-818600\id_rsa Username:docker}
	I0108 21:08:27.706705    4132 main.go:141] libmachine: [stdout =====>] : 172.29.96.48
	
	I0108 21:08:27.706833    4132 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:08:27.706833    4132 sshutil.go:53] new ssh client: &{IP:172.29.96.48 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-818600\id_rsa Username:docker}
	I0108 21:08:27.825154    4132 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0523452s)
	I0108 21:08:27.825154    4132 ssh_runner.go:235] Completed: cat /version.json: (5.0406035s)
	I0108 21:08:27.839058    4132 ssh_runner.go:195] Run: systemctl --version
	I0108 21:08:27.859923    4132 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 21:08:27.868375    4132 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 21:08:27.881751    4132 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:08:27.908915    4132 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 21:08:27.908915    4132 start.go:475] detecting cgroup driver to use...
	I0108 21:08:27.908915    4132 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:08:27.951592    4132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0108 21:08:27.981072    4132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 21:08:27.998108    4132 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 21:08:28.011604    4132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 21:08:28.048152    4132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 21:08:28.078565    4132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 21:08:28.108590    4132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 21:08:28.140171    4132 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:08:28.168869    4132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 21:08:28.197865    4132 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:08:28.225652    4132 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:08:28.256748    4132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:08:28.421642    4132 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:08:28.447430    4132 start.go:475] detecting cgroup driver to use...
	I0108 21:08:28.461872    4132 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 21:08:28.500650    4132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:08:28.530466    4132 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:08:28.583956    4132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:08:28.621398    4132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:08:28.654844    4132 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:08:28.710495    4132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:08:28.729776    4132 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:08:28.773068    4132 ssh_runner.go:195] Run: which cri-dockerd
	I0108 21:08:28.791939    4132 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 21:08:28.807488    4132 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0108 21:08:28.849640    4132 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 21:08:29.019033    4132 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 21:08:29.188043    4132 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0108 21:08:29.188043    4132 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0108 21:08:29.242046    4132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:08:29.422826    4132 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 21:09:30.530003    4132 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1068653s)
	I0108 21:09:30.543556    4132 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0108 21:09:30.569668    4132 out.go:177] 
	W0108 21:09:30.570581    4132 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Mon 2024-01-08 21:07:06 UTC, ends at Mon 2024-01-08 21:09:30 UTC. --
	Jan 08 21:07:58 second-818600 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.490402031Z" level=info msg="Starting up"
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.491257970Z" level=info msg="containerd not running, starting managed containerd"
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.492956349Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=685
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.529766661Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.553371058Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.553404559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.555491156Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.555583461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.555833372Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.555870974Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.555972679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.556086784Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.556118886Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.556241991Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.556796617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.556879721Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.556895722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.557051629Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.557180835Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.557322342Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.557409146Z" level=info msg="metadata content store policy set" policy=shared
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.574335432Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.574810955Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.574863757Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.574944461Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.575033865Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.575093668Z" level=info msg="NRI interface is disabled by configuration."
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.575158271Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.575432883Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.575525588Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.575621792Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.575964008Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.575999010Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.576096814Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.576236421Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.576263922Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.576280023Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.576295224Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.576318425Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.576334925Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.576553136Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.577969701Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578126409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578167011Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578192412Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578273416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578371120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578391121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578404422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578417722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578431623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578445423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578822741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578877844Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.578981648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.579013550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.579032051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.579049552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.579071653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.579146756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.579247661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.579272062Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.579383567Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.579473571Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.579499973Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.580484118Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.580632325Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.580888837Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 08 21:07:58 second-818600 dockerd[685]: time="2024-01-08T21:07:58.581474964Z" level=info msg="containerd successfully booted in 0.054769s"
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.621001202Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.633865300Z" level=info msg="Loading containers: start."
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.844155976Z" level=info msg="Loading containers: done."
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.860820350Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.860924255Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.861137065Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.861322774Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.861433479Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.861809996Z" level=info msg="Daemon has completed initialization"
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.923348757Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 08 21:07:58 second-818600 systemd[1]: Started Docker Application Container Engine.
	Jan 08 21:07:58 second-818600 dockerd[679]: time="2024-01-08T21:07:58.923812679Z" level=info msg="API listen on [::]:2376"
	Jan 08 21:08:29 second-818600 dockerd[679]: time="2024-01-08T21:08:29.450296956Z" level=info msg="Processing signal 'terminated'"
	Jan 08 21:08:29 second-818600 dockerd[679]: time="2024-01-08T21:08:29.452074556Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 08 21:08:29 second-818600 dockerd[679]: time="2024-01-08T21:08:29.452111256Z" level=info msg="Daemon shutdown complete"
	Jan 08 21:08:29 second-818600 dockerd[679]: time="2024-01-08T21:08:29.452143856Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 08 21:08:29 second-818600 dockerd[679]: time="2024-01-08T21:08:29.452182356Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 08 21:08:29 second-818600 systemd[1]: Stopping Docker Application Container Engine...
	Jan 08 21:08:30 second-818600 systemd[1]: docker.service: Succeeded.
	Jan 08 21:08:30 second-818600 systemd[1]: Stopped Docker Application Container Engine.
	Jan 08 21:08:30 second-818600 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 21:08:30 second-818600 dockerd[1015]: time="2024-01-08T21:08:30.527998356Z" level=info msg="Starting up"
	Jan 08 21:09:30 second-818600 dockerd[1015]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 08 21:09:30 second-818600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 08 21:09:30 second-818600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 08 21:09:30 second-818600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0108 21:09:30.570581    4132 out.go:239] * 
	W0108 21:09:30.571399    4132 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:09:30.573329    4132 out.go:177] 
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-01-08 21:03:57 UTC, ends at Mon 2024-01-08 21:11:03 UTC. --
	Jan 08 21:06:05 first-818600 dockerd[1327]: time="2024-01-08T21:06:05.580199899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:06:05 first-818600 cri-dockerd[1210]: time="2024-01-08T21:06:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b404de5eb525b15882fab68f81222ce97bcb3a0f3b534fb4b48f5442f004ee9a/resolv.conf as [nameserver 172.29.96.1]"
	Jan 08 21:06:05 first-818600 dockerd[1327]: time="2024-01-08T21:06:05.810805003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:06:05 first-818600 dockerd[1327]: time="2024-01-08T21:06:05.811033405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:06:05 first-818600 dockerd[1327]: time="2024-01-08T21:06:05.811137806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:06:05 first-818600 dockerd[1327]: time="2024-01-08T21:06:05.811152906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:06:05 first-818600 dockerd[1327]: time="2024-01-08T21:06:05.820328386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:06:05 first-818600 dockerd[1327]: time="2024-01-08T21:06:05.820414386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:06:05 first-818600 dockerd[1327]: time="2024-01-08T21:06:05.821921100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:06:05 first-818600 dockerd[1327]: time="2024-01-08T21:06:05.821956700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:06:06 first-818600 dockerd[1327]: time="2024-01-08T21:06:06.266403418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:06:06 first-818600 dockerd[1327]: time="2024-01-08T21:06:06.266744321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:06:06 first-818600 dockerd[1327]: time="2024-01-08T21:06:06.268047531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:06:06 first-818600 dockerd[1327]: time="2024-01-08T21:06:06.268184533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:06:06 first-818600 cri-dockerd[1210]: time="2024-01-08T21:06:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a889ccd29e09ae039154230d449ae27d9b8f24a57fae9b644acac05893db2684/resolv.conf as [nameserver 172.29.96.1]"
	Jan 08 21:06:06 first-818600 dockerd[1327]: time="2024-01-08T21:06:06.756536912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:06:06 first-818600 dockerd[1327]: time="2024-01-08T21:06:06.756690913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:06:06 first-818600 dockerd[1327]: time="2024-01-08T21:06:06.756714013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:06:06 first-818600 dockerd[1327]: time="2024-01-08T21:06:06.756731213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:06:07 first-818600 cri-dockerd[1210]: time="2024-01-08T21:06:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ab4297d0e76dee4a13ce69d2fcd460049910afb68d691950d3cbb8a79bf35ac3/resolv.conf as [nameserver 172.29.96.1]"
	Jan 08 21:06:07 first-818600 dockerd[1327]: time="2024-01-08T21:06:07.155193181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:06:07 first-818600 dockerd[1327]: time="2024-01-08T21:06:07.155384882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:06:07 first-818600 dockerd[1327]: time="2024-01-08T21:06:07.155413183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:06:07 first-818600 dockerd[1327]: time="2024-01-08T21:06:07.155429883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:06:13 first-818600 cri-dockerd[1210]: time="2024-01-08T21:06:13Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e730c2848021b       6e38f40d628db       4 minutes ago       Running             storage-provisioner       0                   ab4297d0e76de       storage-provisioner
	a2ea6e10c564c       ead0a4a53df89       4 minutes ago       Running             coredns                   0                   a889ccd29e09a       coredns-5dd5756b68-zq24b
	e307be4ccd41b       83f6cc407eed8       4 minutes ago       Running             kube-proxy                0                   b404de5eb525b       kube-proxy-mcs2d
	85ececa4e0728       e3db313c6dbc0       5 minutes ago       Running             kube-scheduler            0                   c122b24fcc0d0       kube-scheduler-first-818600
	aee58df36f328       73deb9a3f7025       5 minutes ago       Running             etcd                      0                   fc273856c38c7       etcd-first-818600
	d8ece02874a67       d058aa5ab969c       5 minutes ago       Running             kube-controller-manager   0                   45838f3f565d9       kube-controller-manager-first-818600
	dbb271804824d       7fe0e6f37db33       5 minutes ago       Running             kube-apiserver            0                   cac9575214c9c       kube-apiserver-first-818600
	
	
	==> coredns [a2ea6e10c564] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecb7ac485f9c2b1ea9804efa09f1e19321672736f367e944ec746de174838ff4ac13f0ea72d0f91eb72162a02d709deb909d06018a457ac2adfe17d34b3613d8
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47975 - 16900 "HINFO IN 4664554832093803988.5842086717852159093. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033095253s
	
	
	==> describe nodes <==
	Name:               first-818600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=first-818600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=first-818600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T21_05_53_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:05:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  first-818600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:10:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:06:13 +0000   Mon, 08 Jan 2024 21:05:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:06:13 +0000   Mon, 08 Jan 2024 21:05:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:06:13 +0000   Mon, 08 Jan 2024 21:05:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:06:13 +0000   Mon, 08 Jan 2024 21:05:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.101.67
	  Hostname:    first-818600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             5925712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             5925712Ki
	  pods:               110
	System Info:
	  Machine ID:                 339923587d18431889da04d8c1da0f64
	  System UUID:                a6f65821-324a-f84d-bdbf-946d7a801f26
	  Boot ID:                    7423fd9d-8240-471e-89a3-7e0b26b27dbf
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-zq24b                100m (5%)     0 (0%)      70Mi (1%)        170Mi (2%)     4m59s
	  kube-system                 etcd-first-818600                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m12s
	  kube-system                 kube-apiserver-first-818600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-controller-manager-first-818600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-proxy-mcs2d                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-scheduler-first-818600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 storage-provisioner                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m57s                  kube-proxy       
	  Normal  Starting                 5m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m20s (x8 over 5m20s)  kubelet          Node first-818600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m20s (x8 over 5m20s)  kubelet          Node first-818600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m20s (x7 over 5m20s)  kubelet          Node first-818600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m11s                  kubelet          Node first-818600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m11s                  kubelet          Node first-818600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m11s                  kubelet          Node first-818600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m8s                   kubelet          Node first-818600 status is now: NodeReady
	  Normal  RegisteredNode           5m                     node-controller  Node first-818600 event: Registered Node first-818600 in Controller
	
	
	==> dmesg <==
	              on the kernel command line
	[  +0.000159] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.641785] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.663886] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.156833] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Jan 8 21:04] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +42.049687] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.146207] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[Jan 8 21:05] systemd-fstab-generator[942]: Ignoring "noauto" for root device
	[  +0.539879] systemd-fstab-generator[982]: Ignoring "noauto" for root device
	[  +0.175898] systemd-fstab-generator[993]: Ignoring "noauto" for root device
	[  +0.186436] systemd-fstab-generator[1006]: Ignoring "noauto" for root device
	[  +1.351620] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.403199] systemd-fstab-generator[1165]: Ignoring "noauto" for root device
	[  +0.173503] systemd-fstab-generator[1176]: Ignoring "noauto" for root device
	[  +0.152802] systemd-fstab-generator[1187]: Ignoring "noauto" for root device
	[  +0.242096] systemd-fstab-generator[1202]: Ignoring "noauto" for root device
	[  +7.371280] systemd-fstab-generator[1312]: Ignoring "noauto" for root device
	[  +8.747079] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.488904] systemd-fstab-generator[1690]: Ignoring "noauto" for root device
	[  +1.026698] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.285756] systemd-fstab-generator[2625]: Ignoring "noauto" for root device
	
	
	==> etcd [aee58df36f32] <==
	{"level":"info","ts":"2024-01-08T21:05:47.107156Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.29.101.67:2380"}
	{"level":"info","ts":"2024-01-08T21:05:47.105683Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"e257769bc898cf3f","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-01-08T21:05:47.105717Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T21:05:47.107683Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T21:05:47.107802Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T21:05:47.10586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e257769bc898cf3f switched to configuration voters=(16309634987003006783)"}
	{"level":"info","ts":"2024-01-08T21:05:47.112653Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5390f3fcccf4a81f","local-member-id":"e257769bc898cf3f","added-peer-id":"e257769bc898cf3f","added-peer-peer-urls":["https://172.29.101.67:2380"]}
	{"level":"info","ts":"2024-01-08T21:05:47.153505Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e257769bc898cf3f is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-08T21:05:47.153724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e257769bc898cf3f became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-08T21:05:47.15393Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e257769bc898cf3f received MsgPreVoteResp from e257769bc898cf3f at term 1"}
	{"level":"info","ts":"2024-01-08T21:05:47.154209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e257769bc898cf3f became candidate at term 2"}
	{"level":"info","ts":"2024-01-08T21:05:47.154514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e257769bc898cf3f received MsgVoteResp from e257769bc898cf3f at term 2"}
	{"level":"info","ts":"2024-01-08T21:05:47.154642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e257769bc898cf3f became leader at term 2"}
	{"level":"info","ts":"2024-01-08T21:05:47.154844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e257769bc898cf3f elected leader e257769bc898cf3f at term 2"}
	{"level":"info","ts":"2024-01-08T21:05:47.160605Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:05:47.165723Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"e257769bc898cf3f","local-member-attributes":"{Name:first-818600 ClientURLs:[https://172.29.101.67:2379]}","request-path":"/0/members/e257769bc898cf3f/attributes","cluster-id":"5390f3fcccf4a81f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T21:05:47.165825Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:05:47.167223Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.29.101.67:2379"}
	{"level":"info","ts":"2024-01-08T21:05:47.165842Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:05:47.175869Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T21:05:47.176274Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T21:05:47.176955Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5390f3fcccf4a81f","local-member-id":"e257769bc898cf3f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:05:47.179525Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:05:47.182509Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:05:47.183664Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:11:04 up 7 min,  0 users,  load average: 0.83, 0.54, 0.27
	Linux first-818600 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [dbb271804824] <==
	I0108 21:05:49.572224       1 shared_informer.go:318] Caches are synced for configmaps
	I0108 21:05:49.576722       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0108 21:05:49.578287       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0108 21:05:49.578314       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0108 21:05:49.578417       1 aggregator.go:166] initial CRD sync complete...
	I0108 21:05:49.578476       1 autoregister_controller.go:141] Starting autoregister controller
	I0108 21:05:49.578579       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0108 21:05:49.578603       1 cache.go:39] Caches are synced for autoregister controller
	I0108 21:05:49.591761       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0108 21:05:49.603908       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 21:05:50.374560       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0108 21:05:50.383167       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0108 21:05:50.383363       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 21:05:51.140593       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:05:51.200359       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 21:05:51.312903       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0108 21:05:51.320708       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.29.101.67]
	I0108 21:05:51.321625       1 controller.go:624] quota admission added evaluator for: endpoints
	I0108 21:05:51.327541       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 21:05:51.471410       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0108 21:05:53.024400       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0108 21:05:53.039104       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0108 21:05:53.051606       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0108 21:06:04.625929       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0108 21:06:05.125631       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [d8ece02874a6] <==
	I0108 21:06:04.353373       1 shared_informer.go:318] Caches are synced for crt configmap
	I0108 21:06:04.358319       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0108 21:06:04.362292       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0108 21:06:04.367137       1 shared_informer.go:318] Caches are synced for endpoint
	I0108 21:06:04.428749       1 shared_informer.go:318] Caches are synced for resource quota
	I0108 21:06:04.462901       1 shared_informer.go:318] Caches are synced for ephemeral
	I0108 21:06:04.468698       1 shared_informer.go:318] Caches are synced for PVC protection
	I0108 21:06:04.481949       1 shared_informer.go:318] Caches are synced for attach detach
	I0108 21:06:04.482606       1 shared_informer.go:318] Caches are synced for stateful set
	I0108 21:06:04.492520       1 shared_informer.go:318] Caches are synced for expand
	I0108 21:06:04.519247       1 shared_informer.go:318] Caches are synced for persistent volume
	I0108 21:06:04.528006       1 shared_informer.go:318] Caches are synced for resource quota
	I0108 21:06:04.633423       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 1"
	I0108 21:06:04.865336       1 shared_informer.go:318] Caches are synced for garbage collector
	I0108 21:06:04.865432       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0108 21:06:04.886666       1 shared_informer.go:318] Caches are synced for garbage collector
	I0108 21:06:05.140997       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mcs2d"
	I0108 21:06:05.329027       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-zq24b"
	I0108 21:06:05.345516       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="712.125702ms"
	I0108 21:06:05.367588       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.461087ms"
	I0108 21:06:05.368235       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="100.7µs"
	I0108 21:06:05.391272       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.701µs"
	I0108 21:06:08.119848       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="445.203µs"
	I0108 21:06:08.161982       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.540818ms"
	I0108 21:06:08.162410       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.2µs"
	
	
	==> kube-proxy [e307be4ccd41] <==
	I0108 21:06:06.080237       1 server_others.go:69] "Using iptables proxy"
	I0108 21:06:06.105628       1 node.go:141] Successfully retrieved node IP: 172.29.101.67
	I0108 21:06:06.244047       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 21:06:06.244071       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 21:06:06.249011       1 server_others.go:152] "Using iptables Proxier"
	I0108 21:06:06.249078       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 21:06:06.249271       1 server.go:846] "Version info" version="v1.28.4"
	I0108 21:06:06.249283       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:06:06.252267       1 config.go:188] "Starting service config controller"
	I0108 21:06:06.252327       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 21:06:06.252378       1 config.go:97] "Starting endpoint slice config controller"
	I0108 21:06:06.252384       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 21:06:06.257764       1 config.go:315] "Starting node config controller"
	I0108 21:06:06.257780       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 21:06:06.352973       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 21:06:06.353391       1 shared_informer.go:318] Caches are synced for service config
	I0108 21:06:06.359316       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [85ececa4e072] <==
	E0108 21:05:49.554008       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:05:49.554029       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 21:05:50.411965       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:05:50.412011       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 21:05:50.474016       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:05:50.474139       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 21:05:50.615603       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 21:05:50.615669       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 21:05:50.687244       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 21:05:50.687382       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 21:05:50.757624       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:05:50.757951       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 21:05:50.771148       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:05:50.771237       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 21:05:50.798718       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 21:05:50.798906       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 21:05:50.815878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:05:50.815923       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 21:05:50.831522       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:05:50.832082       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 21:05:50.892929       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:05:50.893075       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 21:05:50.895353       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:05:50.895666       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0108 21:05:53.225109       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 21:03:57 UTC, ends at Mon 2024-01-08 21:11:04 UTC. --
	Jan 08 21:06:08 first-818600 kubelet[2649]: I0108 21:06:08.096511    2649 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mcs2d" podStartSLOduration=3.096418625 podCreationTimestamp="2024-01-08 21:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 21:06:06.919169137 +0000 UTC m=+13.938345292" watchObservedRunningTime="2024-01-08 21:06:08.096418625 +0000 UTC m=+15.115594780"
	Jan 08 21:06:08 first-818600 kubelet[2649]: I0108 21:06:08.118746    2649 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=7.118688584 podCreationTimestamp="2024-01-08 21:06:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 21:06:08.096879128 +0000 UTC m=+15.116055283" watchObservedRunningTime="2024-01-08 21:06:08.118688584 +0000 UTC m=+15.137864839"
	Jan 08 21:06:08 first-818600 kubelet[2649]: I0108 21:06:08.144803    2649 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-zq24b" podStartSLOduration=3.144762171 podCreationTimestamp="2024-01-08 21:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 21:06:08.121938007 +0000 UTC m=+15.141114262" watchObservedRunningTime="2024-01-08 21:06:08.144762171 +0000 UTC m=+15.163938426"
	Jan 08 21:06:13 first-818600 kubelet[2649]: I0108 21:06:13.546720    2649 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 08 21:06:13 first-818600 kubelet[2649]: I0108 21:06:13.547744    2649 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 08 21:06:53 first-818600 kubelet[2649]: E0108 21:06:53.451619    2649 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:06:53 first-818600 kubelet[2649]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:06:53 first-818600 kubelet[2649]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:06:53 first-818600 kubelet[2649]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:07:53 first-818600 kubelet[2649]: E0108 21:07:53.451144    2649 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:07:53 first-818600 kubelet[2649]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:07:53 first-818600 kubelet[2649]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:07:53 first-818600 kubelet[2649]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:08:53 first-818600 kubelet[2649]: E0108 21:08:53.451213    2649 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:08:53 first-818600 kubelet[2649]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:08:53 first-818600 kubelet[2649]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:08:53 first-818600 kubelet[2649]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:09:53 first-818600 kubelet[2649]: E0108 21:09:53.451604    2649 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:09:53 first-818600 kubelet[2649]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:09:53 first-818600 kubelet[2649]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:09:53 first-818600 kubelet[2649]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:10:53 first-818600 kubelet[2649]: E0108 21:10:53.451678    2649 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:10:53 first-818600 kubelet[2649]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:10:53 first-818600 kubelet[2649]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:10:53 first-818600 kubelet[2649]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [e730c2848021] <==
	I0108 21:06:07.250774       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 21:06:07.265133       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 21:06:07.265195       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 21:06:07.275288       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 21:06:07.275939       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_first-818600_5b2627b6-c99a-4589-bdc6-c278d7a2d2f0!
	I0108 21:06:07.280277       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"065399c3-c3a5-47f0-8c56-170a8823ed0e", APIVersion:"v1", ResourceVersion:"397", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' first-818600_5b2627b6-c99a-4589-bdc6-c278d7a2d2f0 became leader
	I0108 21:06:07.376936       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_first-818600_5b2627b6-c99a-4589-bdc6-c278d7a2d2f0!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 21:10:56.613957    5688 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
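
Note on the repeated kubelet entries in the log excerpt above: the "Could not set up iptables canary" messages come from kubelet periodically re-creating a KUBE-KUBELET-CANARY chain to detect rule flushes; the IPv6 variant fails because the guest has no ip6tables "nat" table available (the error's "do you need to insmod?" hint points at a missing ip6table_nat module), which is noisy but generally harmless on an IPv4-only cluster. A minimal diagnostic sketch, assuming it is run inside the guest (for example over "minikube ssh") and that the module is not built into the kernel:

	package main
	
	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)
	
	// Checks /proc/modules for ip6table_nat; if the module is absent (and IPv6 NAT
	// is not built in), "ip6tables -t nat ..." fails with exit status 3 as seen above.
	func main() {
		f, err := os.Open("/proc/modules")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			if strings.HasPrefix(sc.Text(), "ip6table_nat ") {
				fmt.Println("ip6table_nat is loaded")
				return
			}
		}
		fmt.Println("ip6table_nat is not loaded")
	}
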
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p first-818600 -n first-818600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p first-818600 -n first-818600: (11.9040798s)
helpers_test.go:261: (dbg) Run:  kubectl --context first-818600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMinikubeProfile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "first-818600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-818600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-818600: (41.9669548s)
--- FAIL: TestMinikubeProfile (544.45s)
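
A recurring pattern in the stderr captures throughout this report is the 'Unable to resolve the current Docker CLI context "default"' warning; the 64-hex-character directory in the missing path looks like the SHA-256 digest that Docker's context store derives from the context name. A small sketch (the naming scheme is an assumption, not taken from minikube) that prints the digest so it can be compared against the meta.json path in the warning:

	package main
	
	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
	)
	
	func main() {
		// Digest of the context name "default"; compare with the
		// ...\.docker\contexts\meta\<digest>\meta.json path in the warning.
		sum := sha256.Sum256([]byte("default"))
		fmt.Println(hex.EncodeToString(sum[:]))
	}
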

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (57.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-554300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-554300 -- exec busybox-5bc68d56bd-hrhnw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-554300 -- exec busybox-5bc68d56bd-hrhnw -- sh -c "ping -c 1 172.29.96.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-554300 -- exec busybox-5bc68d56bd-hrhnw -- sh -c "ping -c 1 172.29.96.1": exit status 1 (10.5755364s)

                                                
                                                
-- stdout --
	PING 172.29.96.1 (172.29.96.1): 56 data bytes
	
	--- 172.29.96.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 21:27:35.864040    3540 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (172.29.96.1) from pod (busybox-5bc68d56bd-hrhnw): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-554300 -- exec busybox-5bc68d56bd-w2zbn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-554300 -- exec busybox-5bc68d56bd-w2zbn -- sh -c "ping -c 1 172.29.96.1"
E0108 21:27:52.227498    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-554300 -- exec busybox-5bc68d56bd-w2zbn -- sh -c "ping -c 1 172.29.96.1": exit status 1 (10.4986713s)

                                                
                                                
-- stdout --
	PING 172.29.96.1 (172.29.96.1): 56 data bytes
	
	--- 172.29.96.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 21:27:46.956921    7616 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (172.29.96.1) from pod (busybox-5bc68d56bd-w2zbn): exit status 1
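
Both pods resolved host.minikube.internal and transmitted the echo request, yet every packet was lost. On the Hyper-V Default Switch a common cause is the Windows host firewall dropping inbound ICMP on the vEthernet adapter rather than a routing failure. What an alternative probe amounts to, sketched in Go under the assumption that the host listens on some TCP port (445 here is only an example), to separate "ICMP filtered" from "host unreachable":

	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Gateway address taken from the test output above; the port is an assumption.
		conn, err := net.DialTimeout("tcp", "172.29.96.1:445", 5*time.Second)
		if err != nil {
			fmt.Println("tcp probe failed:", err)
			return
		}
		conn.Close()
		fmt.Println("host reachable over TCP; ICMP loss is likely a firewall policy")
	}
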
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-554300 -n multinode-554300
E0108 21:27:58.289598    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-554300 -n multinode-554300: (12.0238896s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 logs -n 25: (8.5102898s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-474500 ssh -- ls                    | mount-start-2-474500 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:17 UTC | 08 Jan 24 21:17 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-474500                           | mount-start-1-474500 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:17 UTC | 08 Jan 24 21:17 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-474500 ssh -- ls                    | mount-start-2-474500 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:17 UTC | 08 Jan 24 21:17 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-474500                           | mount-start-2-474500 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:17 UTC | 08 Jan 24 21:18 UTC |
	| start   | -p mount-start-2-474500                           | mount-start-2-474500 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:18 UTC | 08 Jan 24 21:19 UTC |
	| mount   | C:\Users\jenkins.minikube7:/minikube-host         | mount-start-2-474500 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:19 UTC |                     |
	|         | --profile mount-start-2-474500 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-474500 ssh -- ls                    | mount-start-2-474500 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:19 UTC | 08 Jan 24 21:20 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-474500                           | mount-start-2-474500 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:20 UTC | 08 Jan 24 21:20 UTC |
	| delete  | -p mount-start-1-474500                           | mount-start-1-474500 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:20 UTC | 08 Jan 24 21:20 UTC |
	| start   | -p multinode-554300                               | multinode-554300     | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:20 UTC | 08 Jan 24 21:27 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-554300 -- apply -f                   | multinode-554300     | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-554300 -- rollout                    | multinode-554300     | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-554300 -- get pods -o                | multinode-554300     | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-554300 -- get pods -o                | multinode-554300     | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-554300 -- exec                       | multinode-554300     | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | busybox-5bc68d56bd-hrhnw --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-554300 -- exec                       | multinode-554300     | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | busybox-5bc68d56bd-w2zbn --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-554300 -- exec                       | multinode-554300     | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | busybox-5bc68d56bd-hrhnw --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-554300 -- exec                       | multinode-554300     | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | busybox-5bc68d56bd-w2zbn --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-554300 -- exec                       | multinode-554300     | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | busybox-5bc68d56bd-hrhnw -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-554300 -- exec                       | multinode-554300     | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | busybox-5bc68d56bd-w2zbn -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-554300 -- get pods -o                | multinode-554300     | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-554300 -- exec                       | multinode-554300     | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | busybox-5bc68d56bd-hrhnw                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-554300 -- exec                       | multinode-554300     | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:27 UTC |                     |
	|         | busybox-5bc68d56bd-hrhnw -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.29.96.1                          |                      |                   |         |                     |                     |
	| kubectl | -p multinode-554300 -- exec                       | multinode-554300     | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | busybox-5bc68d56bd-w2zbn                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-554300 -- exec                       | multinode-554300     | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:27 UTC |                     |
	|         | busybox-5bc68d56bd-w2zbn -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.29.96.1                          |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:20:35
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:20:35.903111    5636 out.go:296] Setting OutFile to fd 1100 ...
	I0108 21:20:35.903581    5636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:20:35.903581    5636 out.go:309] Setting ErrFile to fd 1456...
	I0108 21:20:35.903581    5636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:20:35.927426    5636 out.go:303] Setting JSON to false
	I0108 21:20:35.930010    5636 start.go:128] hostinfo: {"hostname":"minikube7","uptime":26777,"bootTime":1704722057,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0108 21:20:35.930010    5636 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 21:20:35.931034    5636 out.go:177] * [multinode-554300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I0108 21:20:35.932139    5636 notify.go:220] Checking for updates...
	I0108 21:20:35.933259    5636 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 21:20:35.933855    5636 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:20:35.934611    5636 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0108 21:20:35.935322    5636 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 21:20:35.935900    5636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:20:35.937684    5636 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:20:41.193851    5636 out.go:177] * Using the hyperv driver based on user configuration
	I0108 21:20:41.194983    5636 start.go:298] selected driver: hyperv
	I0108 21:20:41.194983    5636 start.go:902] validating driver "hyperv" against <nil>
	I0108 21:20:41.195072    5636 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:20:41.247898    5636 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 21:20:41.249111    5636 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:20:41.249263    5636 cni.go:84] Creating CNI manager for ""
	I0108 21:20:41.249263    5636 cni.go:136] 0 nodes found, recommending kindnet
	I0108 21:20:41.249263    5636 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 21:20:41.249356    5636 start_flags.go:323] config:
	{Name:multinode-554300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-554300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:20:41.249777    5636 iso.go:125] acquiring lock: {Name:mk1869c5b5033c1301af33a7fa364ec43dd63efe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:20:41.251546    5636 out.go:177] * Starting control plane node multinode-554300 in cluster multinode-554300
	I0108 21:20:41.252042    5636 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 21:20:41.252229    5636 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0108 21:20:41.252318    5636 cache.go:56] Caching tarball of preloaded images
	I0108 21:20:41.252627    5636 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:20:41.252838    5636 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0108 21:20:41.253488    5636 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\config.json ...
	I0108 21:20:41.253488    5636 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\config.json: {Name:mk475867b7ba15e8d34fa49bffa7b0032e0b76ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
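
The cluster config dumped a few lines above is persisted here as plain JSON at the config.json path shown. A partial reader is enough to inspect it; the struct below is a sketch that assumes the JSON keys match the field names visible in the dump (Name, Driver, Memory, CPUs) and is not minikube's full schema:

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os"
	)
	
	// Partial view of the profile config; field names mirror those visible in the
	// config dump above.
	type partialConfig struct {
		Name   string
		Driver string
		Memory int
		CPUs   int
	}
	
	func main() {
		data, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\config.json`)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var cfg partialConfig
		if err := json.Unmarshal(data, &cfg); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%s: driver=%s memory=%dMB cpus=%d\n", cfg.Name, cfg.Driver, cfg.Memory, cfg.CPUs)
	}
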
	I0108 21:20:41.255539    5636 start.go:365] acquiring machines lock for multinode-554300: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:20:41.255539    5636 start.go:369] acquired machines lock for "multinode-554300" in 0s
	I0108 21:20:41.255539    5636 start.go:93] Provisioning new machine with config: &{Name:multinode-554300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-554300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 21:20:41.255539    5636 start.go:125] createHost starting for "" (driver="hyperv")
	I0108 21:20:41.256855    5636 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0108 21:20:41.257037    5636 start.go:159] libmachine.API.Create for "multinode-554300" (driver="hyperv")
	I0108 21:20:41.257037    5636 client.go:168] LocalClient.Create starting
	I0108 21:20:41.257037    5636 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0108 21:20:41.257037    5636 main.go:141] libmachine: Decoding PEM data...
	I0108 21:20:41.257037    5636 main.go:141] libmachine: Parsing certificate...
	I0108 21:20:41.257037    5636 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0108 21:20:41.258105    5636 main.go:141] libmachine: Decoding PEM data...
	I0108 21:20:41.258105    5636 main.go:141] libmachine: Parsing certificate...
	I0108 21:20:41.258293    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0108 21:20:43.312209    5636 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0108 21:20:43.312209    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:20:43.312209    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0108 21:20:45.051395    5636 main.go:141] libmachine: [stdout =====>] : False
	
	I0108 21:20:45.051452    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:20:45.051452    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0108 21:20:46.547207    5636 main.go:141] libmachine: [stdout =====>] : True
	
	I0108 21:20:46.547398    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:20:46.547568    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0108 21:20:50.055834    5636 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0108 21:20:50.055834    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:20:50.055834    5636 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0108 21:20:50.510870    5636 main.go:141] libmachine: Creating SSH key...
	I0108 21:20:51.125533    5636 main.go:141] libmachine: Creating VM...
	I0108 21:20:51.126539    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0108 21:20:53.974854    5636 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0108 21:20:53.974977    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:20:53.975175    5636 main.go:141] libmachine: Using switch "Default Switch"
	I0108 21:20:53.975274    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0108 21:20:55.693394    5636 main.go:141] libmachine: [stdout =====>] : True
	
	I0108 21:20:55.693481    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:20:55.693481    5636 main.go:141] libmachine: Creating VHD
	I0108 21:20:55.693481    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0108 21:20:59.399008    5636 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E3CE1F9C-D2C8-40C9-BBCE-7F49674DAB4E
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0108 21:20:59.399253    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:20:59.399354    5636 main.go:141] libmachine: Writing magic tar header
	I0108 21:20:59.399601    5636 main.go:141] libmachine: Writing SSH key tar header
	I0108 21:20:59.408017    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0108 21:21:02.547829    5636 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:21:02.548024    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:21:02.548024    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300\disk.vhd' -SizeBytes 20000MB
	I0108 21:21:05.086164    5636 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:21:05.086164    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:21:05.086164    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-554300 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0108 21:21:08.618310    5636 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-554300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0108 21:21:08.618310    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:21:08.618391    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-554300 -DynamicMemoryEnabled $false
	I0108 21:21:10.815356    5636 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:21:10.815639    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:21:10.815879    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-554300 -Count 2
	I0108 21:21:12.979458    5636 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:21:12.979458    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:21:12.979542    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-554300 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300\boot2docker.iso'
	I0108 21:21:15.476914    5636 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:21:15.476914    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:21:15.476914    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-554300 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300\disk.vhd'
	I0108 21:21:18.036389    5636 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:21:18.036487    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:21:18.036487    5636 main.go:141] libmachine: Starting VM...
	I0108 21:21:18.036540    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-554300
	I0108 21:21:20.850822    5636 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:21:20.850861    5636 main.go:141] libmachine: [stderr =====>] : 
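
Every Hyper-V operation in this section (New-VHD, Convert-VHD, Resize-VHD, New-VM, Set-VMMemory, Set-VMProcessor, Set-VMDvdDrive, Add-VMHardDiskDrive, Start-VM) follows the pattern visible in the [executing ==>] lines: one non-interactive PowerShell invocation per step, with stdout and stderr captured. A stripped-down sketch of that pattern (the helper is illustrative, not minikube's driver code; the cmdlet and flags are copied from the log):

	package main
	
	import (
		"bytes"
		"fmt"
		"os/exec"
	)
	
	// Runs a single PowerShell command the way the log shows: no profile,
	// non-interactive, output captured for the [stdout]/[stderr] lines.
	func runPS(command string) (string, string, error) {
		cmd := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive", command)
		var stdout, stderr bytes.Buffer
		cmd.Stdout, cmd.Stderr = &stdout, &stderr
		err := cmd.Run()
		return stdout.String(), stderr.String(), err
	}
	
	func main() {
		out, _, err := runPS(`( Hyper-V\Get-VM multinode-554300 ).state`)
		if err != nil {
			fmt.Println("powershell error:", err)
			return
		}
		fmt.Print("VM state: ", out)
	}
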
	I0108 21:21:20.850861    5636 main.go:141] libmachine: Waiting for host to start...
	I0108 21:21:20.850917    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:21:23.139924    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:21:23.139924    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:21:23.139924    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:21:25.633484    5636 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:21:25.633520    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:21:26.635662    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:21:28.816413    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:21:28.816413    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:21:28.816536    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:21:31.325895    5636 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:21:31.326064    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:21:32.328617    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:21:34.535585    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:21:34.535585    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:21:34.535808    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:21:37.076068    5636 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:21:37.076250    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:21:38.079097    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:21:40.280870    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:21:40.281082    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:21:40.281133    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:21:42.793814    5636 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:21:42.793814    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:21:43.799362    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:21:46.011977    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:21:46.012254    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:21:46.012341    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:21:48.557150    5636 main.go:141] libmachine: [stdout =====>] : 172.29.107.59
	
	I0108 21:21:48.557150    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:21:48.557401    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:21:50.629202    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:21:50.629202    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:21:50.629280    5636 machine.go:88] provisioning docker machine ...
	I0108 21:21:50.629383    5636 buildroot.go:166] provisioning hostname "multinode-554300"
	I0108 21:21:50.629519    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:21:52.781973    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:21:52.782143    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:21:52.782212    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:21:55.259506    5636 main.go:141] libmachine: [stdout =====>] : 172.29.107.59
	
	I0108 21:21:55.259506    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:21:55.266248    5636 main.go:141] libmachine: Using SSH client type: native
	I0108 21:21:55.275514    5636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.107.59 22 <nil> <nil>}
	I0108 21:21:55.275514    5636 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-554300 && echo "multinode-554300" | sudo tee /etc/hostname
	I0108 21:21:55.462844    5636 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-554300
	
	I0108 21:21:55.462966    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:21:57.564752    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:21:57.564963    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:21:57.564963    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:22:00.048441    5636 main.go:141] libmachine: [stdout =====>] : 172.29.107.59
	
	I0108 21:22:00.048619    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:00.054430    5636 main.go:141] libmachine: Using SSH client type: native
	I0108 21:22:00.055145    5636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.107.59 22 <nil> <nil>}
	I0108 21:22:00.055145    5636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-554300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-554300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-554300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:22:00.224160    5636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:22:00.224229    5636 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0108 21:22:00.224229    5636 buildroot.go:174] setting up certificates
	I0108 21:22:00.224229    5636 provision.go:83] configureAuth start
	I0108 21:22:00.224229    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:22:02.327159    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:22:02.327159    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:02.327159    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:22:04.816362    5636 main.go:141] libmachine: [stdout =====>] : 172.29.107.59
	
	I0108 21:22:04.816417    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:04.816417    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:22:06.888015    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:22:06.888015    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:06.888261    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:22:09.388013    5636 main.go:141] libmachine: [stdout =====>] : 172.29.107.59
	
	I0108 21:22:09.388013    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:09.388013    5636 provision.go:138] copyHostCerts
	I0108 21:22:09.388512    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0108 21:22:09.389134    5636 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0108 21:22:09.389223    5636 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0108 21:22:09.389798    5636 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0108 21:22:09.391093    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0108 21:22:09.391453    5636 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0108 21:22:09.391453    5636 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0108 21:22:09.391947    5636 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0108 21:22:09.393088    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0108 21:22:09.393202    5636 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0108 21:22:09.393202    5636 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0108 21:22:09.393202    5636 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0108 21:22:09.394717    5636 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-554300 san=[172.29.107.59 172.29.107.59 localhost 127.0.0.1 minikube multinode-554300]
	I0108 21:22:09.473378    5636 provision.go:172] copyRemoteCerts
	I0108 21:22:09.486399    5636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:22:09.486399    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:22:11.568869    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:22:11.568869    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:11.568869    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:22:14.051179    5636 main.go:141] libmachine: [stdout =====>] : 172.29.107.59
	
	I0108 21:22:14.051586    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:14.052075    5636 sshutil.go:53] new ssh client: &{IP:172.29.107.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300\id_rsa Username:docker}
	I0108 21:22:14.159842    5636 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6734195s)
	I0108 21:22:14.159940    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0108 21:22:14.160073    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:22:14.203933    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0108 21:22:14.204355    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0108 21:22:14.244927    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0108 21:22:14.245394    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:22:14.282565    5636 provision.go:86] duration metric: configureAuth took 14.0582641s
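
The configureAuth phase that just finished generates a server certificate whose SANs cover the VM address, localhost, and the machine name, signs it with the CA under .minikube\certs, and copies ca.pem, server.pem, and server-key.pem into /etc/docker for the TLS flags that appear later in the docker unit. A standard-library sketch of producing a certificate with such SANs (self-signed here for brevity, so not the exact minikube flow; names and values are taken from the log and config dump above):

	package main
	
	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-554300"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "multinode-554300"},
			IPAddresses:  []net.IP{net.ParseIP("172.29.107.59"), net.ParseIP("127.0.0.1")},
		}
		// Self-signed for brevity; the real flow signs with the CA key (ca.pem/ca-key.pem).
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
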
	I0108 21:22:14.282565    5636 buildroot.go:189] setting minikube options for container-runtime
	I0108 21:22:14.283189    5636 config.go:182] Loaded profile config "multinode-554300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:22:14.283189    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:22:16.414747    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:22:16.415227    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:16.415275    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:22:18.904796    5636 main.go:141] libmachine: [stdout =====>] : 172.29.107.59
	
	I0108 21:22:18.904796    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:18.911540    5636 main.go:141] libmachine: Using SSH client type: native
	I0108 21:22:18.912414    5636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.107.59 22 <nil> <nil>}
	I0108 21:22:18.912414    5636 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 21:22:19.070331    5636 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0108 21:22:19.070331    5636 buildroot.go:70] root file system type: tmpfs
	I0108 21:22:19.070331    5636 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 21:22:19.070331    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:22:21.158954    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:22:21.158954    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:21.158954    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:22:23.657280    5636 main.go:141] libmachine: [stdout =====>] : 172.29.107.59
	
	I0108 21:22:23.657552    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:23.663135    5636 main.go:141] libmachine: Using SSH client type: native
	I0108 21:22:23.663907    5636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.107.59 22 <nil> <nil>}
	I0108 21:22:23.663907    5636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 21:22:23.839813    5636 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 21:22:23.839813    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:22:25.914263    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:22:25.914577    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:25.915417    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:22:28.438826    5636 main.go:141] libmachine: [stdout =====>] : 172.29.107.59
	
	I0108 21:22:28.439105    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:28.444851    5636 main.go:141] libmachine: Using SSH client type: native
	I0108 21:22:28.445126    5636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.107.59 22 <nil> <nil>}
	I0108 21:22:28.445716    5636 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 21:22:29.430043    5636 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0108 21:22:29.430043    5636 machine.go:91] provisioned docker machine in 38.8005627s
	I0108 21:22:29.430043    5636 client.go:171] LocalClient.Create took 1m48.1724516s
	I0108 21:22:29.430043    5636 start.go:167] duration metric: libmachine.API.Create for "multinode-554300" took 1m48.1724516s
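The docker.service install above hinges on diff's exit status: it returns 0 only when the current and new unit files are identical, and non-zero when they differ or when the current file is missing (the "can't stat" case in the output), which takes the move / daemon-reload / restart branch. A minimal sketch of the same pattern, with a placeholder service name and paths rather than the ones from this run:

    # Install a rendered unit file only when it actually changed.
    # "example.service" and the paths are placeholders, not taken from the log.
    if ! sudo diff -u /lib/systemd/system/example.service /tmp/example.service.new; then
        sudo mv /tmp/example.service.new /lib/systemd/system/example.service
        sudo systemctl daemon-reload
        sudo systemctl enable example.service
        sudo systemctl restart example.service
    fi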
	I0108 21:22:29.430043    5636 start.go:300] post-start starting for "multinode-554300" (driver="hyperv")
	I0108 21:22:29.430043    5636 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:22:29.446891    5636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:22:29.447411    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:22:31.535959    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:22:31.535959    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:31.536092    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:22:34.041385    5636 main.go:141] libmachine: [stdout =====>] : 172.29.107.59
	
	I0108 21:22:34.041458    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:34.041625    5636 sshutil.go:53] new ssh client: &{IP:172.29.107.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300\id_rsa Username:docker}
	I0108 21:22:34.167660    5636 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7202246s)
	I0108 21:22:34.181625    5636 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:22:34.187447    5636 command_runner.go:130] > NAME=Buildroot
	I0108 21:22:34.187447    5636 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0108 21:22:34.187447    5636 command_runner.go:130] > ID=buildroot
	I0108 21:22:34.187447    5636 command_runner.go:130] > VERSION_ID=2021.02.12
	I0108 21:22:34.187447    5636 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0108 21:22:34.188201    5636 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 21:22:34.188201    5636 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0108 21:22:34.188201    5636 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0108 21:22:34.189074    5636 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem -> 30082.pem in /etc/ssl/certs
	I0108 21:22:34.189074    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem -> /etc/ssl/certs/30082.pem
	I0108 21:22:34.203030    5636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:22:34.216378    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem --> /etc/ssl/certs/30082.pem (1708 bytes)
	I0108 21:22:34.252816    5636 start.go:303] post-start completed in 4.8227478s
	I0108 21:22:34.255829    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:22:36.396096    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:22:36.396140    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:36.396140    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:22:38.968458    5636 main.go:141] libmachine: [stdout =====>] : 172.29.107.59
	
	I0108 21:22:38.968532    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:38.968755    5636 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\config.json ...
	I0108 21:22:38.972149    5636 start.go:128] duration metric: createHost completed in 1m57.7159518s
	I0108 21:22:38.972255    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:22:41.066758    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:22:41.066758    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:41.066976    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:22:43.587441    5636 main.go:141] libmachine: [stdout =====>] : 172.29.107.59
	
	I0108 21:22:43.587441    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:43.593267    5636 main.go:141] libmachine: Using SSH client type: native
	I0108 21:22:43.593431    5636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.107.59 22 <nil> <nil>}
	I0108 21:22:43.594004    5636 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 21:22:43.749949    5636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704748963.751480015
	
	I0108 21:22:43.749949    5636 fix.go:206] guest clock: 1704748963.751480015
	I0108 21:22:43.749949    5636 fix.go:219] Guest: 2024-01-08 21:22:43.751480015 +0000 UTC Remote: 2024-01-08 21:22:38.9721499 +0000 UTC m=+123.241787001 (delta=4.779330115s)
	I0108 21:22:43.750516    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:22:45.834533    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:22:45.834533    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:45.834764    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:22:48.332782    5636 main.go:141] libmachine: [stdout =====>] : 172.29.107.59
	
	I0108 21:22:48.332878    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:48.338498    5636 main.go:141] libmachine: Using SSH client type: native
	I0108 21:22:48.339337    5636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.107.59 22 <nil> <nil>}
	I0108 21:22:48.339337    5636 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1704748963
	I0108 21:22:48.511571    5636 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan  8 21:22:43 UTC 2024
	
	I0108 21:22:48.511571    5636 fix.go:226] clock set: Mon Jan  8 21:22:43 UTC 2024
	 (err=<nil>)
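The clock fix above reads the guest time with "date +%s.%N", compares it against the host clock (a delta of roughly 4.8s here), and resets the guest with "date -s @<unix-timestamp>". A rough sketch of that check; the SSH target variable and the 2-second threshold are assumptions, not values from the log:

    # Compare guest and local clocks and resync the guest if the skew is noticeable.
    HOST=172.29.107.59                      # VM address from this run; adjust as needed
    guest=$(ssh docker@"$HOST" 'date +%s')
    now=$(date +%s)
    skew=$(( now - guest ))
    if [ "${skew#-}" -gt 2 ]; then          # absolute skew above 2s (threshold assumed)
        ssh docker@"$HOST" "sudo date -s @$now"
    fi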
	I0108 21:22:48.511632    5636 start.go:83] releasing machines lock for "multinode-554300", held for 2m7.2553787s
	I0108 21:22:48.511793    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:22:50.582429    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:22:50.582487    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:50.582487    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:22:53.154739    5636 main.go:141] libmachine: [stdout =====>] : 172.29.107.59
	
	I0108 21:22:53.155049    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:53.159722    5636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:22:53.159971    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:22:53.172201    5636 ssh_runner.go:195] Run: cat /version.json
	I0108 21:22:53.172201    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:22:55.319900    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:22:55.320068    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:55.320068    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:22:55.350981    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:22:55.350981    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:55.350981    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:22:57.914884    5636 main.go:141] libmachine: [stdout =====>] : 172.29.107.59
	
	I0108 21:22:57.914961    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:57.915455    5636 sshutil.go:53] new ssh client: &{IP:172.29.107.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300\id_rsa Username:docker}
	I0108 21:22:57.992107    5636 main.go:141] libmachine: [stdout =====>] : 172.29.107.59
	
	I0108 21:22:57.992107    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:22:57.992762    5636 sshutil.go:53] new ssh client: &{IP:172.29.107.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300\id_rsa Username:docker}
	I0108 21:22:58.013021    5636 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1702660877-17806", "minikube_version": "v1.32.0", "commit": "957da21b08687cca2533dd65b67e68ead277b79e"}
	I0108 21:22:58.013087    5636 ssh_runner.go:235] Completed: cat /version.json: (4.8407953s)
	I0108 21:22:58.027284    5636 ssh_runner.go:195] Run: systemctl --version
	I0108 21:22:58.035446    5636 command_runner.go:130] > systemd 247 (247)
	I0108 21:22:58.035446    5636 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0108 21:22:58.049356    5636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 21:22:58.187901    5636 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 21:22:58.188390    5636 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0108 21:22:58.188390    5636 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0285024s)
	W0108 21:22:58.188390    5636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 21:22:58.201640    5636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:22:58.223910    5636 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0108 21:22:58.224354    5636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
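The find invocation above side-lines any bridge or podman CNI config by renaming it with a .mk_disabled suffix; the %!p(MISSING) in the echoed command is Go's fmt marker for a format verb with a missing argument, and the underlying find directive is -printf "%p, " (the file path). The same operation written out as a plain shell command, as a sketch:

    # Disable bridge/podman CNI configs by renaming them, printing each path as it goes.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
        -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;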
	I0108 21:22:58.224492    5636 start.go:475] detecting cgroup driver to use...
	I0108 21:22:58.224710    5636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:22:58.252453    5636 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0108 21:22:58.266788    5636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0108 21:22:58.303548    5636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 21:22:58.319123    5636 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 21:22:58.331830    5636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 21:22:58.361549    5636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 21:22:58.391533    5636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 21:22:58.424232    5636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 21:22:58.459437    5636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:22:58.493817    5636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 21:22:58.524147    5636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:22:58.537340    5636 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0108 21:22:58.551560    5636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:22:58.581600    5636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:22:58.740196    5636 ssh_runner.go:195] Run: sudo systemctl restart containerd
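The sed edits above pin containerd to the cgroupfs driver, the io.containerd.runc.v2 shim, the registry.k8s.io/pause:3.9 sandbox image and /etc/cni/net.d. As a rough sketch only (key names follow upstream containerd conventions, this is not the literal config.toml from the VM), the touched settings live in sections like these:

    # Illustrative fragment equivalent to what the sed edits above produce; written to a
    # scratch path on purpose, since the real file on the guest is /etc/containerd/config.toml.
    cat <<'EOF' | sudo tee /tmp/containerd-fragment.toml
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.9"
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false
    EOF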
	I0108 21:22:58.769406    5636 start.go:475] detecting cgroup driver to use...
	I0108 21:22:58.782627    5636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 21:22:58.803041    5636 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0108 21:22:58.803079    5636 command_runner.go:130] > [Unit]
	I0108 21:22:58.803079    5636 command_runner.go:130] > Description=Docker Application Container Engine
	I0108 21:22:58.803079    5636 command_runner.go:130] > Documentation=https://docs.docker.com
	I0108 21:22:58.803141    5636 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0108 21:22:58.803141    5636 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0108 21:22:58.803141    5636 command_runner.go:130] > StartLimitBurst=3
	I0108 21:22:58.803141    5636 command_runner.go:130] > StartLimitIntervalSec=60
	I0108 21:22:58.803141    5636 command_runner.go:130] > [Service]
	I0108 21:22:58.803209    5636 command_runner.go:130] > Type=notify
	I0108 21:22:58.803209    5636 command_runner.go:130] > Restart=on-failure
	I0108 21:22:58.803209    5636 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0108 21:22:58.803209    5636 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0108 21:22:58.803209    5636 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0108 21:22:58.803209    5636 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0108 21:22:58.803209    5636 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0108 21:22:58.803301    5636 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0108 21:22:58.803301    5636 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0108 21:22:58.803301    5636 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0108 21:22:58.803301    5636 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0108 21:22:58.803301    5636 command_runner.go:130] > ExecStart=
	I0108 21:22:58.803379    5636 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0108 21:22:58.803379    5636 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0108 21:22:58.803419    5636 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0108 21:22:58.803419    5636 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0108 21:22:58.803419    5636 command_runner.go:130] > LimitNOFILE=infinity
	I0108 21:22:58.803466    5636 command_runner.go:130] > LimitNPROC=infinity
	I0108 21:22:58.803544    5636 command_runner.go:130] > LimitCORE=infinity
	I0108 21:22:58.803544    5636 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0108 21:22:58.803544    5636 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0108 21:22:58.803642    5636 command_runner.go:130] > TasksMax=infinity
	I0108 21:22:58.803668    5636 command_runner.go:130] > TimeoutStartSec=0
	I0108 21:22:58.803668    5636 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0108 21:22:58.803668    5636 command_runner.go:130] > Delegate=yes
	I0108 21:22:58.803668    5636 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0108 21:22:58.803713    5636 command_runner.go:130] > KillMode=process
	I0108 21:22:58.803713    5636 command_runner.go:130] > [Install]
	I0108 21:22:58.803713    5636 command_runner.go:130] > WantedBy=multi-user.target
	I0108 21:22:58.818685    5636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:22:58.849948    5636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:22:58.884272    5636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:22:58.918380    5636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:22:58.951004    5636 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:22:59.002511    5636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:22:59.022789    5636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:22:59.054709    5636 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0108 21:22:59.069672    5636 ssh_runner.go:195] Run: which cri-dockerd
	I0108 21:22:59.079141    5636 command_runner.go:130] > /usr/bin/cri-dockerd
	I0108 21:22:59.093290    5636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 21:22:59.111472    5636 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0108 21:22:59.152144    5636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 21:22:59.327037    5636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 21:22:59.476144    5636 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0108 21:22:59.476407    5636 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0108 21:22:59.518361    5636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:22:59.684798    5636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 21:23:01.167592    5636 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4827861s)
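The 130-byte /etc/docker/daemon.json copied above is not printed in the log. The canonical way Docker's cgroup driver is selected in such a file is the exec-opts key, so a generic example looks like the following; this is illustrative only, not the payload minikube actually scp'd:

    # Generic daemon.json selecting the cgroupfs driver (contents are an assumption,
    # not the real file from this run), followed by the reload/restart the log performs.
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker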
	I0108 21:23:01.181016    5636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0108 21:23:01.211140    5636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0108 21:23:01.245225    5636 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0108 21:23:01.406951    5636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 21:23:01.558168    5636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:23:01.718386    5636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0108 21:23:01.755656    5636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0108 21:23:01.791794    5636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:23:01.962437    5636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0108 21:23:02.061829    5636 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 21:23:02.078551    5636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 21:23:02.086755    5636 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0108 21:23:02.086865    5636 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 21:23:02.086865    5636 command_runner.go:130] > Device: 16h/22d	Inode: 924         Links: 1
	I0108 21:23:02.086919    5636 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0108 21:23:02.086947    5636 command_runner.go:130] > Access: 2024-01-08 21:23:01.984184599 +0000
	I0108 21:23:02.086947    5636 command_runner.go:130] > Modify: 2024-01-08 21:23:01.984184599 +0000
	I0108 21:23:02.086947    5636 command_runner.go:130] > Change: 2024-01-08 21:23:01.988184599 +0000
	I0108 21:23:02.086947    5636 command_runner.go:130] >  Birth: -
	I0108 21:23:02.086947    5636 start.go:543] Will wait 60s for crictl version
	I0108 21:23:02.100308    5636 ssh_runner.go:195] Run: which crictl
	I0108 21:23:02.105461    5636 command_runner.go:130] > /usr/bin/crictl
	I0108 21:23:02.116487    5636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 21:23:02.184703    5636 command_runner.go:130] > Version:  0.1.0
	I0108 21:23:02.184703    5636 command_runner.go:130] > RuntimeName:  docker
	I0108 21:23:02.184703    5636 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0108 21:23:02.184703    5636 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 21:23:02.184703    5636 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
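The crictl call above picks up its endpoint from the /etc/crictl.yaml written a moment earlier (runtime-endpoint: unix:///var/run/cri-dockerd.sock). An equivalent invocation that passes the endpoint explicitly, for reference:

    # Same query as above, but without relying on /etc/crictl.yaml.
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version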
	I0108 21:23:02.195314    5636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 21:23:02.223424    5636 command_runner.go:130] > 24.0.7
	I0108 21:23:02.236993    5636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 21:23:02.265376    5636 command_runner.go:130] > 24.0.7
	I0108 21:23:02.267423    5636 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0108 21:23:02.267614    5636 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0108 21:23:02.272212    5636 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0108 21:23:02.272212    5636 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0108 21:23:02.272212    5636 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0108 21:23:02.272212    5636 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:21:91:d6 Flags:up|broadcast|multicast|running}
	I0108 21:23:02.275195    5636 ip.go:210] interface addr: fe80::2be2:9d7a:5cc4:f25c/64
	I0108 21:23:02.275195    5636 ip.go:210] interface addr: 172.29.96.1/20
	I0108 21:23:02.287723    5636 ssh_runner.go:195] Run: grep 172.29.96.1	host.minikube.internal$ /etc/hosts
	I0108 21:23:02.293781    5636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
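The /etc/hosts edit above is a replace-or-append: any stale host.minikube.internal line is filtered out with grep -v, the fresh tab-separated mapping is appended, and the result is copied back over /etc/hosts via a temp file. Generalized with a placeholder name and address (bash syntax, as in the original command):

    # Replace-or-append a tab-separated hosts entry without creating duplicates.
    NAME=host.example.internal     # placeholder host name
    IP=192.0.2.10                  # placeholder address
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts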
	I0108 21:23:02.312059    5636 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 21:23:02.324132    5636 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 21:23:02.348010    5636 docker.go:685] Got preloaded images: 
	I0108 21:23:02.348010    5636 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0108 21:23:02.360284    5636 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0108 21:23:02.374688    5636 command_runner.go:139] > {"Repositories":{}}
	I0108 21:23:02.388765    5636 ssh_runner.go:195] Run: which lz4
	I0108 21:23:02.395230    5636 command_runner.go:130] > /usr/bin/lz4
	I0108 21:23:02.395390    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0108 21:23:02.408639    5636 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 21:23:02.414222    5636 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 21:23:02.414347    5636 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 21:23:02.414347    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0108 21:23:04.650321    5636 docker.go:649] Took 2.254667 seconds to copy over tarball
	I0108 21:23:04.663228    5636 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 21:23:13.746869    5636 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (9.0829139s)
	I0108 21:23:13.746937    5636 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 21:23:13.812657    5636 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0108 21:23:13.826623    5636 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8
bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021
a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0108 21:23:13.826927    5636 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0108 21:23:13.869875    5636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:23:14.032888    5636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 21:23:16.523606    5636 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4907059s)
	I0108 21:23:16.533562    5636 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 21:23:16.557373    5636 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0108 21:23:16.557373    5636 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0108 21:23:16.557373    5636 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0108 21:23:16.557373    5636 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0108 21:23:16.557373    5636 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0108 21:23:16.557373    5636 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0108 21:23:16.557373    5636 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0108 21:23:16.557373    5636 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:23:16.557373    5636 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0108 21:23:16.557373    5636 cache_images.go:84] Images are preloaded, skipping loading
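The preload sequence above works because the tarball is extracted under /var (where Docker's image store lives), the tag index at /var/lib/docker/image/overlay2/repositories.json is rewritten to match the extracted layers, and the restart makes the daemon re-read that metadata so the tags appear. A quick sanity check against one of the tags listed above:

    # Confirm a preloaded image tag is visible to the restarted daemon.
    docker images --format '{{.Repository}}:{{.Tag}}' \
        | grep -q '^registry.k8s.io/kube-apiserver:v1.28.4$' && echo "preload present"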
	I0108 21:23:16.565941    5636 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 21:23:16.598167    5636 command_runner.go:130] > cgroupfs
	I0108 21:23:16.599158    5636 cni.go:84] Creating CNI manager for ""
	I0108 21:23:16.599471    5636 cni.go:136] 1 nodes found, recommending kindnet
	I0108 21:23:16.599506    5636 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:23:16.599542    5636 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.29.107.59 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-554300 NodeName:multinode-554300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.29.107.59"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.29.107.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 21:23:16.599573    5636 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.29.107.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-554300"
	  kubeletExtraArgs:
	    node-ip: 172.29.107.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.29.107.59"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:23:16.599573    5636 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-554300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.107.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-554300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:23:16.613747    5636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 21:23:16.628309    5636 command_runner.go:130] > kubeadm
	I0108 21:23:16.628309    5636 command_runner.go:130] > kubectl
	I0108 21:23:16.628309    5636 command_runner.go:130] > kubelet
	I0108 21:23:16.629395    5636 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:23:16.642331    5636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:23:16.656277    5636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0108 21:23:16.680892    5636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:23:16.706251    5636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
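The two unit files written just above (the 10-kubeadm.conf drop-in and /lib/systemd/system/kubelet.service) are merged by systemd, with the drop-in's empty ExecStart= clearing the base command before the full flag set is applied, the same mechanism spelled out in the docker unit comments earlier in this log. The merged result can be inspected on the guest with:

    # Print the base kubelet unit followed by its drop-ins in the order systemd applies them.
    sudo systemctl cat kubelet.service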
	I0108 21:23:16.747934    5636 ssh_runner.go:195] Run: grep 172.29.107.59	control-plane.minikube.internal$ /etc/hosts
	I0108 21:23:16.753448    5636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.107.59	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:23:16.769725    5636 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300 for IP: 172.29.107.59
	I0108 21:23:16.769818    5636 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:23:16.770206    5636 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0108 21:23:16.771103    5636 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0108 21:23:16.771722    5636 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\client.key
	I0108 21:23:16.771722    5636 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\client.crt with IP's: []
	I0108 21:23:16.901905    5636 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\client.crt ...
	I0108 21:23:16.901905    5636 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\client.crt: {Name:mk49d3e42d21455fd577131b32bba6b574ee72f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:23:16.903865    5636 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\client.key ...
	I0108 21:23:16.903865    5636 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\client.key: {Name:mkee40c0c38cdd680ed7f1262cf812607321bafc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:23:16.905330    5636 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.key.389290a8
	I0108 21:23:16.905330    5636 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.crt.389290a8 with IP's: [172.29.107.59 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 21:23:17.147440    5636 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.crt.389290a8 ...
	I0108 21:23:17.147440    5636 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.crt.389290a8: {Name:mkf6ecfd13822cf68f98d9ffff060256a3df503d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:23:17.149521    5636 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.key.389290a8 ...
	I0108 21:23:17.149521    5636 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.key.389290a8: {Name:mkc513f643c1b1904a08293b99a02ee3a48677f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:23:17.149815    5636 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.crt.389290a8 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.crt
	I0108 21:23:17.160888    5636 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.key.389290a8 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.key
	I0108 21:23:17.163038    5636 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\proxy-client.key
	I0108 21:23:17.163038    5636 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\proxy-client.crt with IP's: []
	I0108 21:23:17.340965    5636 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\proxy-client.crt ...
	I0108 21:23:17.340965    5636 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\proxy-client.crt: {Name:mkfd5b1b8d43d840ea63458bcafdb0288a11b2ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:23:17.342455    5636 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\proxy-client.key ...
	I0108 21:23:17.342455    5636 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\proxy-client.key: {Name:mk3e68baecc3847caa6f93e7eee78e7f96ea88b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
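The apiserver certificate generated above embeds the IP SANs listed in the crypto.go line (172.29.107.59, 10.96.0.1, 127.0.0.1, 10.0.0.1). A standard way to confirm the SANs on any of the written certificates, shown as a sketch against a local copy of apiserver.crt:

    # List the subject alternative names baked into the generated apiserver cert.
    openssl x509 -noout -text -in apiserver.crt | grep -A1 'Subject Alternative Name'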
	I0108 21:23:17.344399    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 21:23:17.344399    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 21:23:17.344399    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 21:23:17.352637    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 21:23:17.352637    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 21:23:17.353569    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0108 21:23:17.353750    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 21:23:17.353936    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 21:23:17.354111    5636 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\3008.pem (1338 bytes)
	W0108 21:23:17.354887    5636 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\3008_empty.pem, impossibly tiny 0 bytes
	I0108 21:23:17.354887    5636 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0108 21:23:17.354887    5636 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0108 21:23:17.354887    5636 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0108 21:23:17.354887    5636 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0108 21:23:17.356144    5636 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem (1708 bytes)
	I0108 21:23:17.356568    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:23:17.356697    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\3008.pem -> /usr/share/ca-certificates/3008.pem
	I0108 21:23:17.356898    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem -> /usr/share/ca-certificates/30082.pem
	I0108 21:23:17.358061    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:23:17.401415    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:23:17.437962    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:23:17.473831    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 21:23:17.515416    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:23:17.551246    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 21:23:17.588050    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:23:17.629385    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 21:23:17.666508    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:23:17.700657    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\3008.pem --> /usr/share/ca-certificates/3008.pem (1338 bytes)
	I0108 21:23:17.742379    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem --> /usr/share/ca-certificates/30082.pem (1708 bytes)
	I0108 21:23:17.777859    5636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:23:17.818162    5636 ssh_runner.go:195] Run: openssl version
	I0108 21:23:17.827639    5636 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0108 21:23:17.841344    5636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3008.pem && ln -fs /usr/share/ca-certificates/3008.pem /etc/ssl/certs/3008.pem"
	I0108 21:23:17.872021    5636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3008.pem
	I0108 21:23:17.877679    5636 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 20:28 /usr/share/ca-certificates/3008.pem
	I0108 21:23:17.878671    5636 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:28 /usr/share/ca-certificates/3008.pem
	I0108 21:23:17.893002    5636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3008.pem
	I0108 21:23:17.900789    5636 command_runner.go:130] > 51391683
	I0108 21:23:17.913134    5636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3008.pem /etc/ssl/certs/51391683.0"
	I0108 21:23:17.942019    5636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30082.pem && ln -fs /usr/share/ca-certificates/30082.pem /etc/ssl/certs/30082.pem"
	I0108 21:23:17.971626    5636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/30082.pem
	I0108 21:23:17.976394    5636 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 20:28 /usr/share/ca-certificates/30082.pem
	I0108 21:23:17.976394    5636 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:28 /usr/share/ca-certificates/30082.pem
	I0108 21:23:17.992214    5636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30082.pem
	I0108 21:23:17.999337    5636 command_runner.go:130] > 3ec20f2e
	I0108 21:23:18.012733    5636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/30082.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:23:18.044625    5636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:23:18.078224    5636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:23:18.083910    5636 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 20:14 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:23:18.083910    5636 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:14 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:23:18.096064    5636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:23:18.103558    5636 command_runner.go:130] > b5213941
	I0108 21:23:18.116281    5636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
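The three blocks above repeat one OpenSSL trust-store convention: copy the PEM under /usr/share/ca-certificates, compute its subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL's hashed lookup can find it. The pattern, generalized with a placeholder file name:

    # Install a CA certificate under the hash-based name OpenSSL resolves.
    pem=/usr/share/ca-certificates/exampleCA.pem   # placeholder certificate path
    hash=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"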
	I0108 21:23:18.152827    5636 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 21:23:18.158557    5636 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:23:18.159468    5636 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:23:18.159468    5636 kubeadm.go:404] StartCluster: {Name:multinode-554300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.28.4 ClusterName:multinode-554300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.29.107.59 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:23:18.168334    5636 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 21:23:18.207597    5636 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:23:18.220872    5636 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0108 21:23:18.221765    5636 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0108 21:23:18.221818    5636 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0108 21:23:18.234265    5636 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:23:18.263185    5636 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:23:18.276701    5636 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0108 21:23:18.276701    5636 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0108 21:23:18.276701    5636 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0108 21:23:18.276701    5636 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:23:18.276701    5636 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:23:18.276701    5636 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 21:23:19.049641    5636 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:23:19.049641    5636 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:23:32.157647    5636 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 21:23:32.157725    5636 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0108 21:23:32.157865    5636 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 21:23:32.157935    5636 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 21:23:32.158092    5636 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:23:32.158151    5636 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:23:32.158292    5636 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:23:32.158385    5636 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:23:32.158642    5636 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:23:32.158642    5636 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:23:32.158642    5636 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:23:32.158642    5636 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:23:32.159617    5636 out.go:204]   - Generating certificates and keys ...
	I0108 21:23:32.159931    5636 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0108 21:23:32.160031    5636 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 21:23:32.160031    5636 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0108 21:23:32.160123    5636 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 21:23:32.160250    5636 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 21:23:32.160332    5636 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 21:23:32.160491    5636 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0108 21:23:32.160491    5636 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 21:23:32.160660    5636 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 21:23:32.160737    5636 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0108 21:23:32.160806    5636 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0108 21:23:32.160806    5636 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 21:23:32.160951    5636 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0108 21:23:32.160951    5636 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 21:23:32.161258    5636 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-554300] and IPs [172.29.107.59 127.0.0.1 ::1]
	I0108 21:23:32.161324    5636 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-554300] and IPs [172.29.107.59 127.0.0.1 ::1]
	I0108 21:23:32.161555    5636 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0108 21:23:32.161555    5636 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 21:23:32.161867    5636 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-554300] and IPs [172.29.107.59 127.0.0.1 ::1]
	I0108 21:23:32.161867    5636 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-554300] and IPs [172.29.107.59 127.0.0.1 ::1]
	I0108 21:23:32.162054    5636 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 21:23:32.162113    5636 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 21:23:32.162293    5636 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 21:23:32.162293    5636 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 21:23:32.162383    5636 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 21:23:32.162383    5636 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0108 21:23:32.162383    5636 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:23:32.162383    5636 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:23:32.162383    5636 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:23:32.162383    5636 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:23:32.162383    5636 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:23:32.162383    5636 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:23:32.162922    5636 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:23:32.163009    5636 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:23:32.163109    5636 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:23:32.163109    5636 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:23:32.163109    5636 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:23:32.163109    5636 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:23:32.163656    5636 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:23:32.163857    5636 out.go:204]   - Booting up control plane ...
	I0108 21:23:32.163704    5636 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:23:32.164530    5636 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:23:32.164530    5636 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:23:32.164530    5636 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:23:32.164530    5636 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:23:32.164530    5636 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:23:32.164530    5636 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:23:32.165075    5636 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:23:32.165118    5636 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:23:32.165277    5636 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:23:32.165277    5636 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:23:32.165277    5636 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 21:23:32.165277    5636 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 21:23:32.165277    5636 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:23:32.165277    5636 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:23:32.165277    5636 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.011386 seconds
	I0108 21:23:32.165277    5636 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.011386 seconds
	I0108 21:23:32.166115    5636 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:23:32.166115    5636 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:23:32.166115    5636 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:23:32.166115    5636 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:23:32.166115    5636 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:23:32.166115    5636 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:23:32.166115    5636 command_runner.go:130] > [mark-control-plane] Marking the node multinode-554300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:23:32.166115    5636 kubeadm.go:322] [mark-control-plane] Marking the node multinode-554300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:23:32.167126    5636 kubeadm.go:322] [bootstrap-token] Using token: oxsnob.3tjj9m44fxw3lbwj
	I0108 21:23:32.167126    5636 out.go:204]   - Configuring RBAC rules ...
	I0108 21:23:32.167126    5636 command_runner.go:130] > [bootstrap-token] Using token: oxsnob.3tjj9m44fxw3lbwj
	I0108 21:23:32.167126    5636 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:23:32.167126    5636 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:23:32.168161    5636 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:23:32.168161    5636 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:23:32.168161    5636 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:23:32.168161    5636 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:23:32.168161    5636 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:23:32.168161    5636 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:23:32.169122    5636 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:23:32.169122    5636 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:23:32.169122    5636 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:23:32.169122    5636 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:23:32.169122    5636 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:23:32.169122    5636 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:23:32.169122    5636 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 21:23:32.169122    5636 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0108 21:23:32.169122    5636 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 21:23:32.169122    5636 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0108 21:23:32.169122    5636 kubeadm.go:322] 
	I0108 21:23:32.169122    5636 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 21:23:32.169122    5636 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0108 21:23:32.169122    5636 kubeadm.go:322] 
	I0108 21:23:32.170132    5636 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0108 21:23:32.170132    5636 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 21:23:32.170132    5636 kubeadm.go:322] 
	I0108 21:23:32.170132    5636 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0108 21:23:32.170132    5636 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 21:23:32.170132    5636 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:23:32.170132    5636 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:23:32.170132    5636 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:23:32.170132    5636 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:23:32.170132    5636 kubeadm.go:322] 
	I0108 21:23:32.170132    5636 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 21:23:32.170132    5636 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0108 21:23:32.170132    5636 kubeadm.go:322] 
	I0108 21:23:32.170132    5636 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:23:32.170132    5636 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:23:32.170132    5636 kubeadm.go:322] 
	I0108 21:23:32.170132    5636 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 21:23:32.170132    5636 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0108 21:23:32.171122    5636 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:23:32.171122    5636 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:23:32.171122    5636 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:23:32.171122    5636 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:23:32.171122    5636 kubeadm.go:322] 
	I0108 21:23:32.171122    5636 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:23:32.171122    5636 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:23:32.171122    5636 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0108 21:23:32.171122    5636 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 21:23:32.171122    5636 kubeadm.go:322] 
	I0108 21:23:32.171122    5636 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token oxsnob.3tjj9m44fxw3lbwj \
	I0108 21:23:32.171122    5636 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token oxsnob.3tjj9m44fxw3lbwj \
	I0108 21:23:32.172120    5636 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:5d4a576aad216c3b59d844299451f9173aefcd7f6ecd29b777d9935fab24b02c \
	I0108 21:23:32.172120    5636 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5d4a576aad216c3b59d844299451f9173aefcd7f6ecd29b777d9935fab24b02c \
	I0108 21:23:32.172120    5636 command_runner.go:130] > 	--control-plane 
	I0108 21:23:32.172120    5636 kubeadm.go:322] 	--control-plane 
	I0108 21:23:32.172120    5636 kubeadm.go:322] 
	I0108 21:23:32.172120    5636 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:23:32.172120    5636 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:23:32.172120    5636 kubeadm.go:322] 
	I0108 21:23:32.172120    5636 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token oxsnob.3tjj9m44fxw3lbwj \
	I0108 21:23:32.172120    5636 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token oxsnob.3tjj9m44fxw3lbwj \
	I0108 21:23:32.172120    5636 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:5d4a576aad216c3b59d844299451f9173aefcd7f6ecd29b777d9935fab24b02c 
	I0108 21:23:32.172120    5636 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5d4a576aad216c3b59d844299451f9173aefcd7f6ecd29b777d9935fab24b02c 
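The join commands printed above are standard kubeadm output; for this multi-node profile minikube drives the equivalent join itself later, so nothing has to be run by hand. Purely for illustration, a worker node would be joined by running the printed command as root (token and CA-cert hash are the ones issued during this init):

    # Run as root on a prospective worker node (illustrative; minikube automates this step).
    sudo kubeadm join control-plane.minikube.internal:8443 \
        --token oxsnob.3tjj9m44fxw3lbwj \
        --discovery-token-ca-cert-hash sha256:5d4a576aad216c3b59d844299451f9173aefcd7f6ecd29b777d9935fab24b02c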
	I0108 21:23:32.172120    5636 cni.go:84] Creating CNI manager for ""
	I0108 21:23:32.172120    5636 cni.go:136] 1 nodes found, recommending kindnet
	I0108 21:23:32.173143    5636 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:23:32.186724    5636 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:23:32.199909    5636 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 21:23:32.200010    5636 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0108 21:23:32.200010    5636 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0108 21:23:32.200010    5636 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 21:23:32.200010    5636 command_runner.go:130] > Access: 2024-01-08 21:21:45.568293000 +0000
	I0108 21:23:32.200010    5636 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0108 21:23:32.200010    5636 command_runner.go:130] > Change: 2024-01-08 21:21:36.017000000 +0000
	I0108 21:23:32.200010    5636 command_runner.go:130] >  Birth: -
	I0108 21:23:32.200079    5636 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 21:23:32.200079    5636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 21:23:32.259752    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:23:33.720288    5636 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0108 21:23:33.720288    5636 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0108 21:23:33.720288    5636 command_runner.go:130] > serviceaccount/kindnet created
	I0108 21:23:33.720394    5636 command_runner.go:130] > daemonset.apps/kindnet created
	I0108 21:23:33.720458    5636 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.4606983s)
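The CNI step above is a plain kubectl apply of the generated kindnet manifest against the in-VM kubeconfig. The equivalent manual invocation, assuming the same guest paths shown in the log, would be:

    # Apply the kindnet manifest that was copied to /var/tmp/minikube/cni.yaml.
    sudo /var/lib/minikube/binaries/v1.28.4/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig \
        apply -f /var/tmp/minikube/cni.yaml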
	I0108 21:23:33.720511    5636 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:23:33.737751    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:33.738644    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28 minikube.k8s.io/name=multinode-554300 minikube.k8s.io/updated_at=2024_01_08T21_23_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:33.741629    5636 command_runner.go:130] > -16
	I0108 21:23:33.741629    5636 ops.go:34] apiserver oom_adj: -16
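The -16 reported here is the kube-apiserver's OOM score adjustment, read straight from procfs; a negative value tells the kernel's OOM killer to avoid the apiserver process. The check is the same one-liner the runner used above:

    # Print the apiserver's OOM score adjustment; -16 means it is deprioritized for OOM kills.
    cat /proc/$(pgrep kube-apiserver)/oom_adj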
	I0108 21:23:33.895402    5636 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0108 21:23:33.908977    5636 command_runner.go:130] > node/multinode-554300 labeled
	I0108 21:23:33.918603    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:34.021120    5636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:23:34.431843    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:34.545244    5636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:23:34.937469    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:35.049467    5636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:23:35.420784    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:35.541745    5636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:23:35.926605    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:36.046870    5636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:23:36.429087    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:36.545435    5636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:23:36.927429    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:37.046551    5636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:23:37.441279    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:37.560839    5636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:23:37.921595    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:38.039473    5636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:23:38.428454    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:38.546322    5636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:23:38.929222    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:39.047568    5636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:23:39.433021    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:39.546030    5636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:23:39.927316    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:40.054262    5636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:23:40.426073    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:40.543797    5636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:23:40.929548    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:41.063253    5636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:23:41.429408    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:41.555597    5636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:23:41.932273    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:42.058543    5636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:23:42.424186    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:42.659385    5636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:23:42.931787    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:43.050874    5636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:23:43.428290    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:43.537401    5636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:23:43.932453    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:44.073634    5636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:23:44.420607    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:44.535495    5636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:23:44.928599    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:23:45.201367    5636 command_runner.go:130] > NAME      SECRETS   AGE
	I0108 21:23:45.201444    5636 command_runner.go:130] > default   0         1s
	I0108 21:23:45.201536    5636 kubeadm.go:1088] duration metric: took 11.4808653s to wait for elevateKubeSystemPrivileges.
	I0108 21:23:45.201536    5636 kubeadm.go:406] StartCluster complete in 27.0419302s
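The burst of "serviceaccounts \"default\" not found" errors above is expected: minikube simply polls until the controller-manager has created the default ServiceAccount in the new cluster. A sketch of the same wait loop, using the in-VM kubectl exactly as the log does (illustrative only; minikube retries internally):

    # Poll until the "default" ServiceAccount exists in the default namespace.
    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5
    done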
	I0108 21:23:45.201634    5636 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:23:45.201854    5636 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 21:23:45.203135    5636 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:23:45.204135    5636 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:23:45.205128    5636 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 21:23:45.205128    5636 addons.go:69] Setting storage-provisioner=true in profile "multinode-554300"
	I0108 21:23:45.205128    5636 addons.go:237] Setting addon storage-provisioner=true in "multinode-554300"
	I0108 21:23:45.205128    5636 addons.go:69] Setting default-storageclass=true in profile "multinode-554300"
	I0108 21:23:45.205128    5636 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-554300"
	I0108 21:23:45.205128    5636 host.go:66] Checking if "multinode-554300" exists ...
	I0108 21:23:45.205128    5636 config.go:182] Loaded profile config "multinode-554300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:23:45.206133    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:23:45.207201    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:23:45.221146    5636 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 21:23:45.223154    5636 kapi.go:59] client config for multinode-554300: &rest.Config{Host:"https://172.29.107.59:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x188c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:23:45.225130    5636 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 21:23:45.226156    5636 round_trippers.go:463] GET https://172.29.107.59:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 21:23:45.226156    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:45.226156    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:45.226156    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:45.240024    5636 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0108 21:23:45.240972    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:45.240972    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:45.240972    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:45.240972    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:45.240972    5636 round_trippers.go:580]     Content-Length: 291
	I0108 21:23:45.240972    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:45 GMT
	I0108 21:23:45.241039    5636 round_trippers.go:580]     Audit-Id: bc61e325-a6d5-4ef3-bf07-542673d84c21
	I0108 21:23:45.241039    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:45.241039    5636 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a05171e3-49ee-4610-ae26-e93c6a171dfe","resourceVersion":"379","creationTimestamp":"2024-01-08T21:23:32Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0108 21:23:45.241824    5636 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a05171e3-49ee-4610-ae26-e93c6a171dfe","resourceVersion":"379","creationTimestamp":"2024-01-08T21:23:32Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0108 21:23:45.241886    5636 round_trippers.go:463] PUT https://172.29.107.59:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 21:23:45.241952    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:45.241952    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:45.241952    5636 round_trippers.go:473]     Content-Type: application/json
	I0108 21:23:45.241952    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:45.253762    5636 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0108 21:23:45.254624    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:45.254624    5636 round_trippers.go:580]     Audit-Id: 11d61540-8be6-43d6-97c2-eee6c4122eee
	I0108 21:23:45.254624    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:45.254624    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:45.254695    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:45.254695    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:45.254769    5636 round_trippers.go:580]     Content-Length: 291
	I0108 21:23:45.254769    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:45 GMT
	I0108 21:23:45.254819    5636 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a05171e3-49ee-4610-ae26-e93c6a171dfe","resourceVersion":"380","creationTimestamp":"2024-01-08T21:23:32Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0108 21:23:45.705832    5636 command_runner.go:130] > apiVersion: v1
	I0108 21:23:45.705937    5636 command_runner.go:130] > data:
	I0108 21:23:45.705937    5636 command_runner.go:130] >   Corefile: |
	I0108 21:23:45.705937    5636 command_runner.go:130] >     .:53 {
	I0108 21:23:45.705937    5636 command_runner.go:130] >         errors
	I0108 21:23:45.706022    5636 command_runner.go:130] >         health {
	I0108 21:23:45.706022    5636 command_runner.go:130] >            lameduck 5s
	I0108 21:23:45.706022    5636 command_runner.go:130] >         }
	I0108 21:23:45.706022    5636 command_runner.go:130] >         ready
	I0108 21:23:45.706022    5636 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0108 21:23:45.706138    5636 command_runner.go:130] >            pods insecure
	I0108 21:23:45.706138    5636 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0108 21:23:45.706138    5636 command_runner.go:130] >            ttl 30
	I0108 21:23:45.706138    5636 command_runner.go:130] >         }
	I0108 21:23:45.706271    5636 command_runner.go:130] >         prometheus :9153
	I0108 21:23:45.706271    5636 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0108 21:23:45.706271    5636 command_runner.go:130] >            max_concurrent 1000
	I0108 21:23:45.706352    5636 command_runner.go:130] >         }
	I0108 21:23:45.706352    5636 command_runner.go:130] >         cache 30
	I0108 21:23:45.706388    5636 command_runner.go:130] >         loop
	I0108 21:23:45.706388    5636 command_runner.go:130] >         reload
	I0108 21:23:45.706388    5636 command_runner.go:130] >         loadbalance
	I0108 21:23:45.706388    5636 command_runner.go:130] >     }
	I0108 21:23:45.706462    5636 command_runner.go:130] > kind: ConfigMap
	I0108 21:23:45.706503    5636 command_runner.go:130] > metadata:
	I0108 21:23:45.706554    5636 command_runner.go:130] >   creationTimestamp: "2024-01-08T21:23:32Z"
	I0108 21:23:45.706598    5636 command_runner.go:130] >   name: coredns
	I0108 21:23:45.706627    5636 command_runner.go:130] >   namespace: kube-system
	I0108 21:23:45.706627    5636 command_runner.go:130] >   resourceVersion: "257"
	I0108 21:23:45.706627    5636 command_runner.go:130] >   uid: 85d0c8c5-2dbc-4b73-acd3-3db46ce68b2b
	I0108 21:23:45.708752    5636 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.29.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
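The sed pipeline above rewrites the CoreDNS ConfigMap in place: it inserts a hosts block that maps host.minikube.internal to the host-side gateway (172.29.96.1 here) ahead of the forward plugin, adds the log plugin, and then replaces the ConfigMap. Taken directly from the command above, the fragment injected into the Corefile is:

        hosts {
           172.29.96.1 host.minikube.internal
           fallthrough
        }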
	I0108 21:23:45.726167    5636 round_trippers.go:463] GET https://172.29.107.59:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 21:23:45.726248    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:45.726248    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:45.726248    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:45.730772    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:23:45.730772    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:45.730772    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:45.730772    5636 round_trippers.go:580]     Content-Length: 291
	I0108 21:23:45.730772    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:45 GMT
	I0108 21:23:45.730772    5636 round_trippers.go:580]     Audit-Id: e1a28ec7-7a30-4342-b990-26d64f5039cb
	I0108 21:23:45.730772    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:45.730772    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:45.730772    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:45.730772    5636 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a05171e3-49ee-4610-ae26-e93c6a171dfe","resourceVersion":"390","creationTimestamp":"2024-01-08T21:23:32Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 21:23:45.731774    5636 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-554300" context rescaled to 1 replicas
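The GET/PUT pair against the coredns Scale subresource above is what drops the deployment from two replicas to one for this single control-plane node. For illustration, the equivalent kubectl command would be:

    # Scale the coredns deployment down to one replica (equivalent to the PUT above).
    kubectl -n kube-system scale deployment coredns --replicas=1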
	I0108 21:23:45.731774    5636 start.go:223] Will wait 6m0s for node &{Name: IP:172.29.107.59 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 21:23:45.732772    5636 out.go:177] * Verifying Kubernetes components...
	I0108 21:23:45.746781    5636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:23:46.408190    5636 command_runner.go:130] > configmap/coredns replaced
	I0108 21:23:46.408190    5636 start.go:929] {"host.minikube.internal": 172.29.96.1} host record injected into CoreDNS's ConfigMap
	I0108 21:23:46.409197    5636 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 21:23:46.410193    5636 kapi.go:59] client config for multinode-554300: &rest.Config{Host:"https://172.29.107.59:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x188c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:23:46.410193    5636 node_ready.go:35] waiting up to 6m0s for node "multinode-554300" to be "Ready" ...
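node_ready then polls the Node object until its Ready condition turns True, which is what the repeated GETs below are doing. For illustration, the same wait can be expressed with kubectl:

    # Block until the node reports Ready (illustrative equivalent of the polling below).
    kubectl wait --for=condition=Ready node/multinode-554300 --timeout=6m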
	I0108 21:23:46.411218    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:46.411218    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:46.411218    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:46.411218    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:46.415210    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:23:46.415891    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:46.415891    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:46.415891    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:46 GMT
	I0108 21:23:46.415891    5636 round_trippers.go:580]     Audit-Id: 69877e8c-40ad-47f5-acc5-47d604aebca5
	I0108 21:23:46.415891    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:46.415891    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:46.415891    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:46.416195    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"335","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0108 21:23:46.917240    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:46.917553    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:46.917553    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:46.917553    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:46.920981    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:23:46.920981    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:46.921645    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:46.921645    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:46.921645    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:46.921645    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:46.921734    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:46 GMT
	I0108 21:23:46.921734    5636 round_trippers.go:580]     Audit-Id: 75389ea5-07d1-47dd-8e7a-ffc6386a9f1f
	I0108 21:23:46.922067    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"335","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0108 21:23:47.412376    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:47.412469    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:47.412469    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:47.412469    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:47.415850    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:23:47.415850    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:47.415850    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:47.415850    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:47.416678    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:47.416678    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:47.416678    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:47 GMT
	I0108 21:23:47.416678    5636 round_trippers.go:580]     Audit-Id: f7422e80-32ce-4174-b28f-02af92f4ca80
	I0108 21:23:47.416989    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"335","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0108 21:23:47.474731    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:23:47.474976    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:23:47.474976    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:23:47.474976    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:23:47.476145    5636 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:23:47.476331    5636 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 21:23:47.477086    5636 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:23:47.477214    5636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
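Enabling the storage-provisioner addon amounts to copying its manifest into /etc/kubernetes/addons inside the VM (the scp above) and then applying it with the in-VM kubectl. A hedged sketch of the apply step, assuming the same paths as above (minikube performs this itself as part of the addon start):

    # Apply the storage-provisioner manifest copied above (illustrative; minikube runs this itself).
    sudo /var/lib/minikube/binaries/v1.28.4/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig \
        apply -f /etc/kubernetes/addons/storage-provisioner.yaml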
	I0108 21:23:47.477244    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:23:47.477794    5636 kapi.go:59] client config for multinode-554300: &rest.Config{Host:"https://172.29.107.59:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x188c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:23:47.478847    5636 addons.go:237] Setting addon default-storageclass=true in "multinode-554300"
	I0108 21:23:47.479004    5636 host.go:66] Checking if "multinode-554300" exists ...
	I0108 21:23:47.480313    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:23:47.918081    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:47.918169    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:47.918169    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:47.918252    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:47.922600    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:23:47.923649    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:47.923649    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:47.923649    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:47 GMT
	I0108 21:23:47.923649    5636 round_trippers.go:580]     Audit-Id: 0f5a3b56-08d8-4e49-9919-ef061e6daccf
	I0108 21:23:47.923758    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:47.923758    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:47.923758    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:47.924147    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"335","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0108 21:23:48.424041    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:48.424139    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:48.424139    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:48.424139    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:48.428284    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:23:48.428284    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:48.428284    5636 round_trippers.go:580]     Audit-Id: 57259220-8c5d-4335-9ff9-270793d87917
	I0108 21:23:48.428284    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:48.428284    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:48.428379    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:48.428379    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:48.428379    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:48 GMT
	I0108 21:23:48.428721    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"335","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0108 21:23:48.429333    5636 node_ready.go:58] node "multinode-554300" has status "Ready":"False"
	I0108 21:23:48.918936    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:48.918936    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:48.918936    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:48.918936    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:48.923537    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:23:48.924415    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:48.924415    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:48.924415    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:48 GMT
	I0108 21:23:48.924415    5636 round_trippers.go:580]     Audit-Id: 9ba61aec-2818-4cf8-9ae9-962e61597714
	I0108 21:23:48.924415    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:48.924415    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:48.924415    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:48.924810    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"335","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0108 21:23:49.411566    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:49.411669    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:49.411669    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:49.411669    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:49.416018    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:23:49.416714    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:49.416714    5636 round_trippers.go:580]     Audit-Id: 112be477-e192-4dde-90eb-529277e8f54b
	I0108 21:23:49.416714    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:49.416714    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:49.416714    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:49.416714    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:49.416847    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:49 GMT
	I0108 21:23:49.417090    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"335","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0108 21:23:49.679383    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:23:49.679383    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:23:49.679668    5636 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:23:49.679668    5636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:23:49.679779    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:23:49.695107    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:23:49.695479    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:23:49.695479    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:23:49.917517    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:49.917517    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:49.917517    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:49.917517    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:49.921752    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:23:49.922583    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:49.922583    5636 round_trippers.go:580]     Audit-Id: 35583b70-fb31-462c-ab84-b54bdbcd644d
	I0108 21:23:49.922583    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:49.922583    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:49.922583    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:49.922583    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:49.922583    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:49 GMT
	I0108 21:23:49.923024    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"335","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0108 21:23:50.411767    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:50.411767    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:50.411767    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:50.411767    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:50.414506    5636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:23:50.415395    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:50.415395    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:50.415395    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:50 GMT
	I0108 21:23:50.415395    5636 round_trippers.go:580]     Audit-Id: 379057fe-9e5d-4392-8863-f075f39c4934
	I0108 21:23:50.415395    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:50.415395    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:50.415395    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:50.415395    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"335","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0108 21:23:50.921468    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:50.921468    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:50.921468    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:50.921468    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:50.925301    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:23:50.925798    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:50.925798    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:50.925798    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:50.925798    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:50.925798    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:50 GMT
	I0108 21:23:50.925798    5636 round_trippers.go:580]     Audit-Id: dbc45308-685d-432e-af1b-88c8ad274db0
	I0108 21:23:50.925798    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:50.925798    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"335","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0108 21:23:50.926598    5636 node_ready.go:58] node "multinode-554300" has status "Ready":"False"
	I0108 21:23:51.413941    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:51.414156    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:51.414156    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:51.414156    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:51.417345    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:23:51.417345    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:51.417345    5636 round_trippers.go:580]     Audit-Id: 4a55ba7e-5f36-4862-9c8a-a63ad3230b2d
	I0108 21:23:51.417345    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:51.417345    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:51.417345    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:51.417841    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:51.417841    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:51 GMT
	I0108 21:23:51.418075    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"335","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0108 21:23:51.921874    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:51.921936    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:51.921936    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:51.921936    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:51.925392    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:23:51.925723    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:51.925723    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:51.925723    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:51.925723    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:51.925723    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:51 GMT
	I0108 21:23:51.925804    5636 round_trippers.go:580]     Audit-Id: 3495fcc7-2fde-4a00-b5b6-7f4155f9831f
	I0108 21:23:51.925835    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:51.925968    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"335","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0108 21:23:52.063431    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:23:52.063431    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:23:52.063612    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:23:52.412226    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:52.412226    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:52.412286    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:52.412286    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:52.416312    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:23:52.416967    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:52.416967    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:52.416967    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:52 GMT
	I0108 21:23:52.416967    5636 round_trippers.go:580]     Audit-Id: 28692231-89eb-4a6a-a82c-c5211b44b4d7
	I0108 21:23:52.417086    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:52.417086    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:52.417086    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:52.417198    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"335","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0108 21:23:52.474952    5636 main.go:141] libmachine: [stdout =====>] : 172.29.107.59
	
	I0108 21:23:52.475122    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:23:52.475122    5636 sshutil.go:53] new ssh client: &{IP:172.29.107.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300\id_rsa Username:docker}
	I0108 21:23:52.644773    5636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:23:52.918320    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:52.918320    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:52.918320    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:52.918320    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:52.921323    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:23:52.921323    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:52.922346    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:52 GMT
	I0108 21:23:52.922346    5636 round_trippers.go:580]     Audit-Id: 8d36c27a-1d26-4085-92cb-8a7ef4625907
	I0108 21:23:52.922346    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:52.922346    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:52.922346    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:52.922346    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:52.922498    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"335","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0108 21:23:53.411329    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:53.411329    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:53.411329    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:53.411329    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:53.416331    5636 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 21:23:53.416331    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:53.416730    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:53.416730    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:53.416730    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:53.416730    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:53.416730    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:53 GMT
	I0108 21:23:53.416730    5636 round_trippers.go:580]     Audit-Id: ccbb99e5-a696-486d-b489-d6c04dd57b0f
	I0108 21:23:53.417223    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"335","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0108 21:23:53.417693    5636 node_ready.go:58] node "multinode-554300" has status "Ready":"False"
	I0108 21:23:53.468456    5636 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0108 21:23:53.468533    5636 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0108 21:23:53.468533    5636 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0108 21:23:53.468533    5636 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0108 21:23:53.468533    5636 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0108 21:23:53.468533    5636 command_runner.go:130] > pod/storage-provisioner created
	I0108 21:23:53.916760    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:53.916760    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:53.916760    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:53.916884    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:53.920188    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:23:53.920188    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:53.920617    5636 round_trippers.go:580]     Audit-Id: d0b844f8-a09f-4482-a7f0-41888545080d
	I0108 21:23:53.920617    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:53.920617    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:53.920617    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:53.920617    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:53.920689    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:53 GMT
	I0108 21:23:53.920975    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"335","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0108 21:23:54.423934    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:54.423995    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:54.423995    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:54.424057    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:54.427849    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:23:54.427849    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:54.427849    5636 round_trippers.go:580]     Audit-Id: 02cf1a21-1543-49f0-8cb5-c9bb6b137865
	I0108 21:23:54.427849    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:54.428452    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:54.428452    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:54.428452    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:54.428452    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:54 GMT
	I0108 21:23:54.428773    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"335","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0108 21:23:54.674580    5636 main.go:141] libmachine: [stdout =====>] : 172.29.107.59
	
	I0108 21:23:54.674580    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:23:54.675226    5636 sshutil.go:53] new ssh client: &{IP:172.29.107.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300\id_rsa Username:docker}
	I0108 21:23:54.811487    5636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:23:54.924566    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:54.924566    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:54.924566    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:54.924566    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:54.929103    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:23:54.929103    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:54.929103    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:54 GMT
	I0108 21:23:54.929103    5636 round_trippers.go:580]     Audit-Id: 404548ef-a10a-433f-8414-c84e6eecfcc6
	I0108 21:23:54.929103    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:54.929225    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:54.929225    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:54.929225    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:54.929569    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"335","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0108 21:23:55.142984    5636 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0108 21:23:55.143369    5636 round_trippers.go:463] GET https://172.29.107.59:8443/apis/storage.k8s.io/v1/storageclasses
	I0108 21:23:55.143369    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:55.143369    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:55.143528    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:55.146810    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:23:55.146810    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:55.146810    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:55.146810    5636 round_trippers.go:580]     Content-Length: 1273
	I0108 21:23:55.146810    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:55 GMT
	I0108 21:23:55.146810    5636 round_trippers.go:580]     Audit-Id: 3bb7c035-1ace-4caa-a177-9d0385d7c787
	I0108 21:23:55.146810    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:55.146810    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:55.146810    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:55.147502    5636 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"410"},"items":[{"metadata":{"name":"standard","uid":"b075b888-8cc7-4120-9d01-9574c6d6d335","resourceVersion":"410","creationTimestamp":"2024-01-08T21:23:55Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T21:23:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0108 21:23:55.148207    5636 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b075b888-8cc7-4120-9d01-9574c6d6d335","resourceVersion":"410","creationTimestamp":"2024-01-08T21:23:55Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T21:23:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0108 21:23:55.148207    5636 round_trippers.go:463] PUT https://172.29.107.59:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0108 21:23:55.148207    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:55.148207    5636 round_trippers.go:473]     Content-Type: application/json
	I0108 21:23:55.148207    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:55.148207    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:55.152449    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:23:55.152449    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:55.152449    5636 round_trippers.go:580]     Audit-Id: e3581d5e-e46c-4233-b35b-459ded12ecbe
	I0108 21:23:55.152449    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:55.152449    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:55.152449    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:55.152449    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:55.152612    5636 round_trippers.go:580]     Content-Length: 1220
	I0108 21:23:55.152612    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:55 GMT
	I0108 21:23:55.152670    5636 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b075b888-8cc7-4120-9d01-9574c6d6d335","resourceVersion":"410","creationTimestamp":"2024-01-08T21:23:55Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T21:23:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0108 21:23:55.153866    5636 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 21:23:55.345043    5636 addons.go:508] enable addons completed in 10.1398638s: enabled=[storage-provisioner default-storageclass]
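The GET followed by PUT against /apis/storage.k8s.io/v1/storageclasses/standard a few lines above is what ensures the freshly applied "standard" StorageClass carries the default-class annotation. A minimal sketch of that read-modify-update with client-go, assuming an already configured *kubernetes.Clientset (the helper and package names are illustrative, not minikube's):

package addonsketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// markDefaultStorageClass fetches the named StorageClass and writes it back
// with storageclass.kubernetes.io/is-default-class=true, mirroring the
// GET-then-PUT pair visible in the round_trippers log above.
func markDefaultStorageClass(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return fmt.Errorf("get storageclass %q: %w", name, err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
	return err
}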
	I0108 21:23:55.414178    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:55.414178    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:55.414178    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:55.414178    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:55.417467    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:23:55.417467    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:55.417931    5636 round_trippers.go:580]     Audit-Id: 2c8f7e92-fd69-48ae-8120-93af1c221781
	I0108 21:23:55.417931    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:55.417931    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:55.417931    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:55.417931    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:55.417931    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:55 GMT
	I0108 21:23:55.418320    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"335","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0108 21:23:55.418848    5636 node_ready.go:58] node "multinode-554300" has status "Ready":"False"
	I0108 21:23:55.918956    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:55.919494    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:55.919494    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:55.919494    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:55.926582    5636 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0108 21:23:55.926725    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:55.926769    5636 round_trippers.go:580]     Audit-Id: 70274569-a806-4422-84b2-705d2c4163c0
	I0108 21:23:55.926819    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:55.926857    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:55.926857    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:55.926857    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:55.926905    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:55 GMT
	I0108 21:23:55.927377    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"335","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0108 21:23:56.411256    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:56.411346    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:56.411346    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:56.411346    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:56.416242    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:23:56.416996    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:56.416996    5636 round_trippers.go:580]     Audit-Id: e513c780-444b-4e87-8029-ce592c4dbe54
	I0108 21:23:56.416996    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:56.416996    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:56.416996    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:56.416996    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:56.416996    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:56 GMT
	I0108 21:23:56.417269    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"335","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0108 21:23:56.920299    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:56.920299    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:56.920299    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:56.920299    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:56.927986    5636 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0108 21:23:56.927986    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:56.927986    5636 round_trippers.go:580]     Audit-Id: e4e9eb8a-fd87-4554-9b67-12325a1684ca
	I0108 21:23:56.927986    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:56.927986    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:56.927986    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:56.927986    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:56.927986    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:56 GMT
	I0108 21:23:56.927986    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"335","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0108 21:23:57.423327    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:57.423327    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:57.423327    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:57.423327    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:57.427000    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:23:57.427752    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:57.427752    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:57.427752    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:57 GMT
	I0108 21:23:57.427829    5636 round_trippers.go:580]     Audit-Id: 06c319bc-f044-42e2-9dc0-452f8c00aa43
	I0108 21:23:57.427829    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:57.427829    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:57.427829    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:57.427829    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"418","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0108 21:23:57.428608    5636 node_ready.go:49] node "multinode-554300" has status "Ready":"True"
	I0108 21:23:57.428667    5636 node_ready.go:38] duration metric: took 11.0184175s waiting for node "multinode-554300" to be "Ready" ...
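The long run of GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300 requests above is node_ready.go polling roughly every 500ms until the node's Ready condition flips to True (about 11.02s in this run). A minimal sketch of that kind of wait loop with client-go, assuming a configured clientset (the function and package names are illustrative):

package nodewaitsketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReady re-fetches the Node on an interval until its Ready
// condition reports True, or the timeout expires.
func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet" and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}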
	I0108 21:23:57.428667    5636 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:23:57.428834    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods
	I0108 21:23:57.428834    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:57.428834    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:57.428834    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:57.436544    5636 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0108 21:23:57.436544    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:57.436544    5636 round_trippers.go:580]     Audit-Id: 07debae3-7d14-4e8f-81fe-0cce9c682a7b
	I0108 21:23:57.436544    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:57.436622    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:57.436622    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:57.436643    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:57.436643    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:57 GMT
	I0108 21:23:57.438401    5636 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"424","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53972 chars]
	I0108 21:23:57.443033    5636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-q7vd7" in "kube-system" namespace to be "Ready" ...
	I0108 21:23:57.443224    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q7vd7
	I0108 21:23:57.443248    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:57.443248    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:57.443248    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:57.446828    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:23:57.446828    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:57.446828    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:57.446828    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:57.446828    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:57 GMT
	I0108 21:23:57.446958    5636 round_trippers.go:580]     Audit-Id: da7ac9a8-2af5-4fdd-b534-176e8aabf654
	I0108 21:23:57.446958    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:57.446958    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:57.447201    5636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"424","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0108 21:23:57.447627    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:57.447627    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:57.447627    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:57.447627    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:57.450556    5636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:23:57.451597    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:57.451597    5636 round_trippers.go:580]     Audit-Id: 432970fb-c01c-45d0-b266-b3fb8a453fab
	I0108 21:23:57.451597    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:57.451597    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:57.451597    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:57.451597    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:57.451597    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:57 GMT
	I0108 21:23:57.451597    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"418","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0108 21:23:57.957803    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q7vd7
	I0108 21:23:57.957875    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:57.957875    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:57.957875    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:57.961246    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:23:57.961246    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:57.961246    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:57.961246    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:57.961246    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:57.961246    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:57.961246    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:57 GMT
	I0108 21:23:57.961246    5636 round_trippers.go:580]     Audit-Id: 5e1129eb-56f6-4a77-abee-5265d66927bb
	I0108 21:23:57.961246    5636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"424","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0108 21:23:57.962335    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:57.962335    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:57.962335    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:57.962335    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:57.965246    5636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:23:57.965246    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:57.965246    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:57.965246    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:57 GMT
	I0108 21:23:57.965246    5636 round_trippers.go:580]     Audit-Id: 5694035a-9471-4518-8aac-c2d99640d589
	I0108 21:23:57.965246    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:57.965246    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:57.965246    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:57.965246    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"418","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0108 21:23:58.450597    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q7vd7
	I0108 21:23:58.450720    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:58.450720    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:58.450720    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:58.457953    5636 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0108 21:23:58.457953    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:58.457953    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:58.457953    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:58.457953    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:58 GMT
	I0108 21:23:58.457953    5636 round_trippers.go:580]     Audit-Id: a0f46b70-e55b-4b8a-aba7-5b3b23df5cc7
	I0108 21:23:58.458515    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:58.458515    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:58.458707    5636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"424","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0108 21:23:58.459434    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:58.459434    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:58.459486    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:58.459486    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:58.462138    5636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:23:58.462138    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:58.462138    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:58.462138    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:58.462138    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:58.462138    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:58 GMT
	I0108 21:23:58.462138    5636 round_trippers.go:580]     Audit-Id: 808380be-3948-4fa9-a0b4-1b8211e3632f
	I0108 21:23:58.462138    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:58.462138    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"418","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0108 21:23:58.944023    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q7vd7
	I0108 21:23:58.944103    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:58.944103    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:58.944103    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:58.949154    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:23:58.949154    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:58.949265    5636 round_trippers.go:580]     Audit-Id: 8114ba42-42d2-4fce-a2ba-3c0aa1ae52b2
	I0108 21:23:58.949265    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:58.949265    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:58.949303    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:58.949303    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:58.949303    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:58 GMT
	I0108 21:23:58.949470    5636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"424","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0108 21:23:58.949829    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:58.949829    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:58.949829    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:58.949829    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:58.953385    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:23:58.953618    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:58.953618    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:58 GMT
	I0108 21:23:58.953618    5636 round_trippers.go:580]     Audit-Id: a7a9e9d7-b22c-4171-8b94-e18b47b3f65f
	I0108 21:23:58.953672    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:58.953672    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:58.953672    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:58.953672    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:58.953672    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"418","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0108 21:23:59.444780    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q7vd7
	I0108 21:23:59.444780    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:59.444780    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:59.444780    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:59.449667    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:23:59.449667    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:59.449667    5636 round_trippers.go:580]     Audit-Id: 162182d0-d7df-4316-89a3-b252eaacff43
	I0108 21:23:59.449667    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:59.449667    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:59.449667    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:59.449667    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:59.449667    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:59 GMT
	I0108 21:23:59.449667    5636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"424","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0108 21:23:59.450873    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:59.450935    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:59.450935    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:59.450935    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:59.454169    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:23:59.454169    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:59.454169    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:59.454169    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:59.454169    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:59 GMT
	I0108 21:23:59.454169    5636 round_trippers.go:580]     Audit-Id: b5298f85-ce93-41b7-aa43-548ea91a7b4e
	I0108 21:23:59.454169    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:59.454169    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:59.454917    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"418","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0108 21:23:59.455291    5636 pod_ready.go:102] pod "coredns-5dd5756b68-q7vd7" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:59.946945    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q7vd7
	I0108 21:23:59.946945    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:59.946945    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:59.946945    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:59.951941    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:23:59.952406    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:59.952406    5636 round_trippers.go:580]     Audit-Id: b40204e8-3b05-43aa-a97c-0bfb8e5e2b99
	I0108 21:23:59.952406    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:59.952406    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:59.952406    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:59.952406    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:59.952406    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:59 GMT
	I0108 21:23:59.952754    5636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"438","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I0108 21:23:59.953434    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:59.953434    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:59.953434    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:59.953434    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:59.957043    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:23:59.957043    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:59.957043    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:59 GMT
	I0108 21:23:59.957208    5636 round_trippers.go:580]     Audit-Id: d0a3e0c7-286b-43cf-89dd-5e00858a812d
	I0108 21:23:59.957208    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:59.957208    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:59.957208    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:59.957269    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:59.957408    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"418","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0108 21:23:59.958230    5636 pod_ready.go:92] pod "coredns-5dd5756b68-q7vd7" in "kube-system" namespace has status "Ready":"True"
	I0108 21:23:59.958230    5636 pod_ready.go:81] duration metric: took 2.5151169s waiting for pod "coredns-5dd5756b68-q7vd7" in "kube-system" namespace to be "Ready" ...
	I0108 21:23:59.958230    5636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:23:59.958230    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-554300
	I0108 21:23:59.958230    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:59.958230    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:59.958230    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:59.961737    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:23:59.962053    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:59.962053    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:59.962053    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:59.962053    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:59 GMT
	I0108 21:23:59.962053    5636 round_trippers.go:580]     Audit-Id: b0897b95-3443-472e-a8d0-e067112e12e1
	I0108 21:23:59.962053    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:59.962053    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:59.962458    5636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-554300","namespace":"kube-system","uid":"06d58411-089c-4312-8685-a2cb7f7e3c33","resourceVersion":"314","creationTimestamp":"2024-01-08T21:23:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.29.107.59:2379","kubernetes.io/config.hash":"9b41c89b0647a3bffea3212cb5464059","kubernetes.io/config.mirror":"9b41c89b0647a3bffea3212cb5464059","kubernetes.io/config.seen":"2024-01-08T21:23:23.164883235Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I0108 21:23:59.963120    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:59.963120    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:59.963208    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:59.963208    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:59.965439    5636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:23:59.965439    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:59.965439    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:59.965439    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:59.965439    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:59.965439    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:59 GMT
	I0108 21:23:59.965439    5636 round_trippers.go:580]     Audit-Id: e0455241-5bbf-410b-9ade-a258d34dd1c9
	I0108 21:23:59.965439    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:59.966629    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"418","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0108 21:23:59.966994    5636 pod_ready.go:92] pod "etcd-multinode-554300" in "kube-system" namespace has status "Ready":"True"
	I0108 21:23:59.966994    5636 pod_ready.go:81] duration metric: took 8.7642ms waiting for pod "etcd-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:23:59.966994    5636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:23:59.966994    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-554300
	I0108 21:23:59.966994    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:59.966994    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:59.966994    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:59.974257    5636 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0108 21:23:59.974257    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:59.974257    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:59 GMT
	I0108 21:23:59.974431    5636 round_trippers.go:580]     Audit-Id: 632cdca3-17a5-4e0b-8f08-3b9797092bd5
	I0108 21:23:59.974431    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:59.974475    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:59.974475    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:59.974475    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:59.974475    5636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-554300","namespace":"kube-system","uid":"54bb0d68-f8ac-4f67-a9cd-71a15ce550ad","resourceVersion":"326","creationTimestamp":"2024-01-08T21:23:32Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.29.107.59:8443","kubernetes.io/config.hash":"2efb47d905867f62472179a55c21eb33","kubernetes.io/config.mirror":"2efb47d905867f62472179a55c21eb33","kubernetes.io/config.seen":"2024-01-08T21:23:32.232190192Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I0108 21:23:59.975304    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:59.975304    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:59.975304    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:59.975304    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:59.978281    5636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:23:59.978281    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:59.978281    5636 round_trippers.go:580]     Audit-Id: 35054a5d-64ee-4798-8b31-bcbda63e8a60
	I0108 21:23:59.978281    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:59.978281    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:59.978281    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:59.978281    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:59.978281    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:59 GMT
	I0108 21:23:59.979221    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"418","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0108 21:23:59.979221    5636 pod_ready.go:92] pod "kube-apiserver-multinode-554300" in "kube-system" namespace has status "Ready":"True"
	I0108 21:23:59.979221    5636 pod_ready.go:81] duration metric: took 12.2268ms waiting for pod "kube-apiserver-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:23:59.979221    5636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:23:59.979221    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-554300
	I0108 21:23:59.979221    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:59.979221    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:59.979221    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:59.982230    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:23:59.982230    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:59.982230    5636 round_trippers.go:580]     Audit-Id: 06ea4491-9365-486b-bf41-7af92e2af3b4
	I0108 21:23:59.982575    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:59.982575    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:59.982575    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:59.982575    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:59.982575    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:59 GMT
	I0108 21:23:59.982643    5636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-554300","namespace":"kube-system","uid":"c5c47910-dee9-4e42-8623-dbc45d13564f","resourceVersion":"361","creationTimestamp":"2024-01-08T21:23:32Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8c1bbf537f0866640621fd36da3286e0","kubernetes.io/config.mirror":"8c1bbf537f0866640621fd36da3286e0","kubernetes.io/config.seen":"2024-01-08T21:23:32.232191792Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I0108 21:23:59.983237    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:59.983319    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:59.983319    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:59.983319    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:59.986007    5636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:23:59.986007    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:59.986007    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:59.986007    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:59 GMT
	I0108 21:23:59.986007    5636 round_trippers.go:580]     Audit-Id: 958dae57-3068-4e0d-bfed-36dc484c9030
	I0108 21:23:59.986007    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:59.986007    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:59.986007    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:59.986007    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"418","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0108 21:23:59.986007    5636 pod_ready.go:92] pod "kube-controller-manager-multinode-554300" in "kube-system" namespace has status "Ready":"True"
	I0108 21:23:59.986007    5636 pod_ready.go:81] duration metric: took 6.7864ms waiting for pod "kube-controller-manager-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:23:59.986007    5636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jsq7c" in "kube-system" namespace to be "Ready" ...
	I0108 21:23:59.986007    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jsq7c
	I0108 21:23:59.986007    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:59.986007    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:59.986007    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:59.990010    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:23:59.990139    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:59.990139    5636 round_trippers.go:580]     Audit-Id: 014cb842-2449-4a5c-817a-ad1f7261bd22
	I0108 21:23:59.990139    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:59.990139    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:59.990241    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:59.990241    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:59.990241    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:59 GMT
	I0108 21:23:59.990725    5636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jsq7c","generateName":"kube-proxy-","namespace":"kube-system","uid":"cbc6a2d2-bb66-4af4-8a7d-315bc293cac0","resourceVersion":"398","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0597a1a7-46a6-4b69-bb4b-83cbc4a269de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0597a1a7-46a6-4b69-bb4b-83cbc4a269de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I0108 21:23:59.991335    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:23:59.991389    5636 round_trippers.go:469] Request Headers:
	I0108 21:23:59.991389    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:23:59.991389    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:23:59.997014    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:23:59.997014    5636 round_trippers.go:577] Response Headers:
	I0108 21:23:59.997014    5636 round_trippers.go:580]     Audit-Id: 6029f9d5-8955-497e-a62f-aded09300bcf
	I0108 21:23:59.997014    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:23:59.997014    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:23:59.997014    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:23:59.997014    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:23:59.997014    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:23:59 GMT
	I0108 21:23:59.997014    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"418","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0108 21:23:59.997989    5636 pod_ready.go:92] pod "kube-proxy-jsq7c" in "kube-system" namespace has status "Ready":"True"
	I0108 21:23:59.997989    5636 pod_ready.go:81] duration metric: took 11.9818ms waiting for pod "kube-proxy-jsq7c" in "kube-system" namespace to be "Ready" ...
	I0108 21:23:59.997989    5636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:24:00.150085    5636 request.go:629] Waited for 151.8883ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-554300
	I0108 21:24:00.150405    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-554300
	I0108 21:24:00.150433    5636 round_trippers.go:469] Request Headers:
	I0108 21:24:00.150433    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:24:00.150433    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:24:00.153761    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:24:00.153761    5636 round_trippers.go:577] Response Headers:
	I0108 21:24:00.153761    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:24:00.153761    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:24:00.154466    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:24:00.154466    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:24:00 GMT
	I0108 21:24:00.154466    5636 round_trippers.go:580]     Audit-Id: 521fb6b2-c977-44a1-a17d-177e72ebbafd
	I0108 21:24:00.154466    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:24:00.154623    5636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-554300","namespace":"kube-system","uid":"f5b78bba-6cd0-495b-b6d6-c9afd93b3534","resourceVersion":"313","creationTimestamp":"2024-01-08T21:23:32Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"33f58bfb7438c4df2c3fb56db1f613ab","kubernetes.io/config.mirror":"33f58bfb7438c4df2c3fb56db1f613ab","kubernetes.io/config.seen":"2024-01-08T21:23:32.232192792Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I0108 21:24:00.353567    5636 request.go:629] Waited for 197.9722ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:24:00.353567    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:24:00.353567    5636 round_trippers.go:469] Request Headers:
	I0108 21:24:00.353567    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:24:00.353730    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:24:00.358224    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:24:00.358286    5636 round_trippers.go:577] Response Headers:
	I0108 21:24:00.358286    5636 round_trippers.go:580]     Audit-Id: 6e1619b5-bfcf-4a45-843a-4ad46a40452a
	I0108 21:24:00.358286    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:24:00.358286    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:24:00.358286    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:24:00.358286    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:24:00.358286    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:24:00 GMT
	I0108 21:24:00.358561    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"418","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0108 21:24:00.359118    5636 pod_ready.go:92] pod "kube-scheduler-multinode-554300" in "kube-system" namespace has status "Ready":"True"
	I0108 21:24:00.359118    5636 pod_ready.go:81] duration metric: took 361.1271ms waiting for pod "kube-scheduler-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:24:00.359118    5636 pod_ready.go:38] duration metric: took 2.9304363s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
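For reference, the readiness polling visible above (repeated GETs against the pod until its Ready condition reports True, with a roughly half-second interval) can be reproduced with a minimal client-go sketch. This is illustrative only, not minikube's pod_ready.go; the 500ms interval, the default kubeconfig path, and the use of the coredns pod name from this log are assumptions.

    // Minimal sketch: poll a pod until its PodReady condition is True,
    // mirroring the GET loop shown in the log above. Assumptions are noted inline.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodReady polls the pod every 500ms (assumed interval) until the
    // PodReady condition is True or the timeout expires.
    func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, err
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	// Assumed: credentials come from the default kubeconfig (~/.kube/config).
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	// Pod name taken from the log above; substitute as needed.
    	if err := waitForPodReady(context.Background(), cs, "kube-system", "coredns-5dd5756b68-q7vd7", 6*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("pod is Ready")
    }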
	I0108 21:24:00.359118    5636 api_server.go:52] waiting for apiserver process to appear ...
	I0108 21:24:00.372869    5636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:24:00.401681    5636 command_runner.go:130] > 2078
	I0108 21:24:00.401905    5636 api_server.go:72] duration metric: took 14.6700562s to wait for apiserver process to appear ...
	I0108 21:24:00.401905    5636 api_server.go:88] waiting for apiserver healthz status ...
	I0108 21:24:00.401905    5636 api_server.go:253] Checking apiserver healthz at https://172.29.107.59:8443/healthz ...
	I0108 21:24:00.411403    5636 api_server.go:279] https://172.29.107.59:8443/healthz returned 200:
	ok
	I0108 21:24:00.411925    5636 round_trippers.go:463] GET https://172.29.107.59:8443/version
	I0108 21:24:00.411925    5636 round_trippers.go:469] Request Headers:
	I0108 21:24:00.411925    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:24:00.412037    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:24:00.413447    5636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:24:00.413447    5636 round_trippers.go:577] Response Headers:
	I0108 21:24:00.413447    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:24:00.413447    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:24:00.413447    5636 round_trippers.go:580]     Content-Length: 264
	I0108 21:24:00.413447    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:24:00 GMT
	I0108 21:24:00.413447    5636 round_trippers.go:580]     Audit-Id: 53f3f87e-6a11-40eb-8900-22750088232a
	I0108 21:24:00.413447    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:24:00.414099    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:24:00.414099    5636 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0108 21:24:00.414099    5636 api_server.go:141] control plane version: v1.28.4
	I0108 21:24:00.414099    5636 api_server.go:131] duration metric: took 12.1943ms to wait for apiserver health ...
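The two probes above (GET /healthz followed by GET /version) are how minikube decides the control plane is serving before it moves on. They can be reproduced by hand from the test host; a minimal sketch, assuming the multinode-554300 profile and its kubeconfig still exist and using the same binary path as the rest of this report:

    # Illustrative re-check of the same endpoints the log polls above (run from PowerShell in the repo root).
    out/minikube-windows-amd64.exe -p multinode-554300 kubectl -- get --raw /healthz
    out/minikube-windows-amd64.exe -p multinode-554300 kubectl -- version -o json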
	I0108 21:24:00.414099    5636 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:24:00.556025    5636 request.go:629] Waited for 141.776ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods
	I0108 21:24:00.556391    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods
	I0108 21:24:00.556391    5636 round_trippers.go:469] Request Headers:
	I0108 21:24:00.556391    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:24:00.556391    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:24:00.561247    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:24:00.561490    5636 round_trippers.go:577] Response Headers:
	I0108 21:24:00.561565    5636 round_trippers.go:580]     Audit-Id: 1f6cd330-3d06-4c7b-893d-351a16e04ca0
	I0108 21:24:00.561565    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:24:00.561565    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:24:00.561565    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:24:00.561565    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:24:00.561565    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:24:00 GMT
	I0108 21:24:00.562892    5636 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"443"},"items":[{"metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"438","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I0108 21:24:00.565536    5636 system_pods.go:59] 8 kube-system pods found
	I0108 21:24:00.565536    5636 system_pods.go:61] "coredns-5dd5756b68-q7vd7" [fe215542-1a69-4152-9098-06937431fa74] Running
	I0108 21:24:00.565536    5636 system_pods.go:61] "etcd-multinode-554300" [06d58411-089c-4312-8685-a2cb7f7e3c33] Running
	I0108 21:24:00.565536    5636 system_pods.go:61] "kindnet-5r79t" [275c1f53-70c6-4922-9ba4-d931e1515729] Running
	I0108 21:24:00.565536    5636 system_pods.go:61] "kube-apiserver-multinode-554300" [54bb0d68-f8ac-4f67-a9cd-71a15ce550ad] Running
	I0108 21:24:00.565536    5636 system_pods.go:61] "kube-controller-manager-multinode-554300" [c5c47910-dee9-4e42-8623-dbc45d13564f] Running
	I0108 21:24:00.565536    5636 system_pods.go:61] "kube-proxy-jsq7c" [cbc6a2d2-bb66-4af4-8a7d-315bc293cac0] Running
	I0108 21:24:00.565753    5636 system_pods.go:61] "kube-scheduler-multinode-554300" [f5b78bba-6cd0-495b-b6d6-c9afd93b3534] Running
	I0108 21:24:00.565753    5636 system_pods.go:61] "storage-provisioner" [2fb8721f-01cc-4078-b45c-964d73e3da98] Running
	I0108 21:24:00.565753    5636 system_pods.go:74] duration metric: took 151.5041ms to wait for pod list to return data ...
	I0108 21:24:00.565753    5636 default_sa.go:34] waiting for default service account to be created ...
	I0108 21:24:00.750248    5636 request.go:629] Waited for 184.3225ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.107.59:8443/api/v1/namespaces/default/serviceaccounts
	I0108 21:24:00.750248    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/namespaces/default/serviceaccounts
	I0108 21:24:00.750248    5636 round_trippers.go:469] Request Headers:
	I0108 21:24:00.750248    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:24:00.750248    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:24:00.754026    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:24:00.754026    5636 round_trippers.go:577] Response Headers:
	I0108 21:24:00.754026    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:24:00.754026    5636 round_trippers.go:580]     Content-Length: 261
	I0108 21:24:00.754026    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:24:00 GMT
	I0108 21:24:00.754026    5636 round_trippers.go:580]     Audit-Id: 9d223977-9384-4637-9b95-cbc33c210091
	I0108 21:24:00.754026    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:24:00.754026    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:24:00.754026    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:24:00.754952    5636 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"443"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"d2fb8b50-fecf-4612-b557-5a63ee90f2f3","resourceVersion":"365","creationTimestamp":"2024-01-08T21:23:44Z"}}]}
	I0108 21:24:00.755310    5636 default_sa.go:45] found service account: "default"
	I0108 21:24:00.755393    5636 default_sa.go:55] duration metric: took 189.6386ms for default service account to be created ...
	I0108 21:24:00.755393    5636 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 21:24:00.953910    5636 request.go:629] Waited for 198.2053ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods
	I0108 21:24:00.954018    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods
	I0108 21:24:00.954018    5636 round_trippers.go:469] Request Headers:
	I0108 21:24:00.954018    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:24:00.954018    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:24:00.959409    5636 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 21:24:00.959409    5636 round_trippers.go:577] Response Headers:
	I0108 21:24:00.960429    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:24:00.960429    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:24:00 GMT
	I0108 21:24:00.960429    5636 round_trippers.go:580]     Audit-Id: 854b6725-c290-4c20-ae50-b2d3d6356b6d
	I0108 21:24:00.960429    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:24:00.960429    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:24:00.960429    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:24:00.962151    5636 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"444"},"items":[{"metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"438","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I0108 21:24:00.964695    5636 system_pods.go:86] 8 kube-system pods found
	I0108 21:24:00.964695    5636 system_pods.go:89] "coredns-5dd5756b68-q7vd7" [fe215542-1a69-4152-9098-06937431fa74] Running
	I0108 21:24:00.964695    5636 system_pods.go:89] "etcd-multinode-554300" [06d58411-089c-4312-8685-a2cb7f7e3c33] Running
	I0108 21:24:00.964695    5636 system_pods.go:89] "kindnet-5r79t" [275c1f53-70c6-4922-9ba4-d931e1515729] Running
	I0108 21:24:00.964695    5636 system_pods.go:89] "kube-apiserver-multinode-554300" [54bb0d68-f8ac-4f67-a9cd-71a15ce550ad] Running
	I0108 21:24:00.964695    5636 system_pods.go:89] "kube-controller-manager-multinode-554300" [c5c47910-dee9-4e42-8623-dbc45d13564f] Running
	I0108 21:24:00.964695    5636 system_pods.go:89] "kube-proxy-jsq7c" [cbc6a2d2-bb66-4af4-8a7d-315bc293cac0] Running
	I0108 21:24:00.964695    5636 system_pods.go:89] "kube-scheduler-multinode-554300" [f5b78bba-6cd0-495b-b6d6-c9afd93b3534] Running
	I0108 21:24:00.964695    5636 system_pods.go:89] "storage-provisioner" [2fb8721f-01cc-4078-b45c-964d73e3da98] Running
	I0108 21:24:00.964695    5636 system_pods.go:126] duration metric: took 209.3011ms to wait for k8s-apps to be running ...
	I0108 21:24:00.964695    5636 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:24:00.977145    5636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:24:00.998431    5636 system_svc.go:56] duration metric: took 33.7361ms WaitForService to wait for kubelet.
	I0108 21:24:00.998431    5636 kubeadm.go:581] duration metric: took 15.2665797s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
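At this point every component in the verify map above (apiserver, system pods, default service account, kubelet, node readiness) has passed. The same state can be re-queried at any time; a hedged example using the profile's bundled kubectl:

    # Illustrative: list the kube-system pods and the default service account checked above.
    out/minikube-windows-amd64.exe -p multinode-554300 kubectl -- get pods -n kube-system -o wide
    out/minikube-windows-amd64.exe -p multinode-554300 kubectl -- get serviceaccount default -n default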
	I0108 21:24:00.998431    5636 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:24:01.157745    5636 request.go:629] Waited for 159.1091ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.107.59:8443/api/v1/nodes
	I0108 21:24:01.157745    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes
	I0108 21:24:01.158058    5636 round_trippers.go:469] Request Headers:
	I0108 21:24:01.158058    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:24:01.158126    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:24:01.163555    5636 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 21:24:01.163865    5636 round_trippers.go:577] Response Headers:
	I0108 21:24:01.163865    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:24:01.163951    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:24:01.163951    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:24:01 GMT
	I0108 21:24:01.163951    5636 round_trippers.go:580]     Audit-Id: c824a42f-34a7-4b79-a051-c1d103e1b556
	I0108 21:24:01.163951    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:24:01.163951    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:24:01.165256    5636 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"444"},"items":[{"metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"418","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4835 chars]
	I0108 21:24:01.165380    5636 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:24:01.165380    5636 node_conditions.go:123] node cpu capacity is 2
	I0108 21:24:01.165380    5636 node_conditions.go:105] duration metric: took 166.9483ms to run NodePressure ...
	I0108 21:24:01.165380    5636 start.go:228] waiting for startup goroutines ...
	I0108 21:24:01.165380    5636 start.go:233] waiting for cluster config update ...
	I0108 21:24:01.165380    5636 start.go:242] writing updated cluster config ...
	I0108 21:24:01.168463    5636 out.go:177] 
	I0108 21:24:01.179821    5636 config.go:182] Loaded profile config "multinode-554300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:24:01.179821    5636 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\config.json ...
	I0108 21:24:01.183635    5636 out.go:177] * Starting worker node multinode-554300-m02 in cluster multinode-554300
	I0108 21:24:01.184404    5636 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 21:24:01.184476    5636 cache.go:56] Caching tarball of preloaded images
	I0108 21:24:01.184714    5636 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:24:01.184714    5636 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0108 21:24:01.184714    5636 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\config.json ...
	I0108 21:24:01.191849    5636 start.go:365] acquiring machines lock for multinode-554300-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:24:01.191849    5636 start.go:369] acquired machines lock for "multinode-554300-m02" in 0s
	I0108 21:24:01.191849    5636 start.go:93] Provisioning new machine with config: &{Name:multinode-554300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-554300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.29.107.59 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0108 21:24:01.191849    5636 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0108 21:24:01.192740    5636 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0108 21:24:01.193752    5636 start.go:159] libmachine.API.Create for "multinode-554300" (driver="hyperv")
	I0108 21:24:01.193752    5636 client.go:168] LocalClient.Create starting
	I0108 21:24:01.193752    5636 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0108 21:24:01.193752    5636 main.go:141] libmachine: Decoding PEM data...
	I0108 21:24:01.193752    5636 main.go:141] libmachine: Parsing certificate...
	I0108 21:24:01.193752    5636 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0108 21:24:01.193752    5636 main.go:141] libmachine: Decoding PEM data...
	I0108 21:24:01.193752    5636 main.go:141] libmachine: Parsing certificate...
	I0108 21:24:01.194745    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0108 21:24:03.118640    5636 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0108 21:24:03.118826    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:24:03.118906    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0108 21:24:04.889287    5636 main.go:141] libmachine: [stdout =====>] : False
	
	I0108 21:24:04.889287    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:24:04.889287    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0108 21:24:06.401550    5636 main.go:141] libmachine: [stdout =====>] : True
	
	I0108 21:24:06.401742    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:24:06.401742    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0108 21:24:10.137090    5636 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0108 21:24:10.137219    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:24:10.139312    5636 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0108 21:24:10.603337    5636 main.go:141] libmachine: Creating SSH key...
	I0108 21:24:10.738572    5636 main.go:141] libmachine: Creating VM...
	I0108 21:24:10.738572    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0108 21:24:13.675525    5636 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0108 21:24:13.675718    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:24:13.675877    5636 main.go:141] libmachine: Using switch "Default Switch"
	I0108 21:24:13.675939    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0108 21:24:15.477370    5636 main.go:141] libmachine: [stdout =====>] : True
	
	I0108 21:24:15.477370    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:24:15.477616    5636 main.go:141] libmachine: Creating VHD
	I0108 21:24:15.477616    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0108 21:24:19.278300    5636 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 41132097-D2A0-40FD-B031-789000B7BC6B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0108 21:24:19.278534    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:24:19.278534    5636 main.go:141] libmachine: Writing magic tar header
	I0108 21:24:19.278598    5636 main.go:141] libmachine: Writing SSH key tar header
	I0108 21:24:19.288129    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0108 21:24:22.464514    5636 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:24:22.464514    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:24:22.464514    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300-m02\disk.vhd' -SizeBytes 20000MB
	I0108 21:24:25.062823    5636 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:24:25.062823    5636 main.go:141] libmachine: [stderr =====>] : 
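The New-VHD / Convert-VHD / Resize-VHD sequence above is the Hyper-V driver's disk-seeding trick: it creates a tiny 10MB fixed VHD, writes a tar stream containing the generated SSH key straight into it (the "Writing magic tar header" / "Writing SSH key tar header" lines), then converts it to a dynamic VHD and grows it to the requested 20000MB. Condensed into plain PowerShell, with the paths copied from the log; the tar-writing step happens inside minikube itself, not via a cmdlet:

    $dir = 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300-m02'
    Hyper-V\New-VHD -Path "$dir\fixed.vhd" -SizeBytes 10MB -Fixed
    # minikube writes the boot2docker "magic" tar header and the SSH key into fixed.vhd at this point
    Hyper-V\Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD -Path "$dir\disk.vhd" -SizeBytes 20000MB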
	I0108 21:24:25.063025    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-554300-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0108 21:24:28.711606    5636 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-554300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0108 21:24:28.711865    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:24:28.712018    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-554300-m02 -DynamicMemoryEnabled $false
	I0108 21:24:30.982673    5636 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:24:30.982673    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:24:30.982673    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-554300-m02 -Count 2
	I0108 21:24:33.179634    5636 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:24:33.179868    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:24:33.179927    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-554300-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300-m02\boot2docker.iso'
	I0108 21:24:35.781391    5636 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:24:35.781645    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:24:35.781828    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-554300-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300-m02\disk.vhd'
	I0108 21:24:38.475727    5636 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:24:38.475919    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:24:38.475919    5636 main.go:141] libmachine: Starting VM...
	I0108 21:24:38.475919    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-554300-m02
	I0108 21:24:41.370442    5636 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:24:41.370600    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:24:41.370600    5636 main.go:141] libmachine: Waiting for host to start...
	I0108 21:24:41.370653    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:24:43.704964    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:24:43.705312    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:24:43.705378    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:24:46.247421    5636 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:24:46.247421    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:24:47.250193    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:24:49.444630    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:24:49.444844    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:24:49.444994    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:24:51.994568    5636 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:24:51.994910    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:24:52.996326    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:24:55.238434    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:24:55.238434    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:24:55.238434    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:24:57.828727    5636 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:24:57.828727    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:24:58.839400    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:25:01.077376    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:25:01.077552    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:25:01.077613    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:25:03.644603    5636 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:25:03.644603    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:25:04.645127    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:25:06.858569    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:25:06.858607    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:25:06.858690    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:25:09.460321    5636 main.go:141] libmachine: [stdout =====>] : 172.29.96.43
	
	I0108 21:25:09.460435    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:25:09.460435    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:25:11.593873    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:25:11.593873    5636 main.go:141] libmachine: [stderr =====>] : 
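The repeated Get-VM state / ipaddresses calls above are the driver's wait-for-boot loop: the VM reports "Running" almost immediately after Start-VM, but the first NIC only reports an address (172.29.96.43) roughly half a minute later. A minimal PowerShell sketch of the same loop; the poll interval and output line are illustrative:

    $vm = 'multinode-554300-m02'
    do {
        Start-Sleep -Seconds 5
        $state = (Hyper-V\Get-VM $vm).State
        $ip    = ((Hyper-V\Get-VM $vm).NetworkAdapters[0]).IPAddresses | Select-Object -First 1
    } while ($state -eq 'Running' -and -not $ip)
    "$vm is $state, IP: $ip"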
	I0108 21:25:11.594183    5636 machine.go:88] provisioning docker machine ...
	I0108 21:25:11.594183    5636 buildroot.go:166] provisioning hostname "multinode-554300-m02"
	I0108 21:25:11.594183    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:25:13.729270    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:25:13.729270    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:25:13.729502    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:25:16.289637    5636 main.go:141] libmachine: [stdout =====>] : 172.29.96.43
	
	I0108 21:25:16.289637    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:25:16.294649    5636 main.go:141] libmachine: Using SSH client type: native
	I0108 21:25:16.305892    5636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.96.43 22 <nil> <nil>}
	I0108 21:25:16.305892    5636 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-554300-m02 && echo "multinode-554300-m02" | sudo tee /etc/hostname
	I0108 21:25:16.457619    5636 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-554300-m02
	
	I0108 21:25:16.457619    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:25:18.544244    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:25:18.544244    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:25:18.544244    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:25:21.090393    5636 main.go:141] libmachine: [stdout =====>] : 172.29.96.43
	
	I0108 21:25:21.090393    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:25:21.095749    5636 main.go:141] libmachine: Using SSH client type: native
	I0108 21:25:21.096450    5636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.96.43 22 <nil> <nil>}
	I0108 21:25:21.096450    5636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-554300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-554300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-554300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:25:21.250194    5636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:25:21.250194    5636 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0108 21:25:21.250194    5636 buildroot.go:174] setting up certificates
	I0108 21:25:21.250194    5636 provision.go:83] configureAuth start
	I0108 21:25:21.250194    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:25:23.401394    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:25:23.401394    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:25:23.401490    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:25:25.944536    5636 main.go:141] libmachine: [stdout =====>] : 172.29.96.43
	
	I0108 21:25:25.944536    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:25:25.944856    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:25:28.078139    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:25:28.078139    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:25:28.078208    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:25:30.657530    5636 main.go:141] libmachine: [stdout =====>] : 172.29.96.43
	
	I0108 21:25:30.657780    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:25:30.657780    5636 provision.go:138] copyHostCerts
	I0108 21:25:30.658077    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0108 21:25:30.658281    5636 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0108 21:25:30.658281    5636 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0108 21:25:30.658904    5636 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0108 21:25:30.660067    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0108 21:25:30.660402    5636 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0108 21:25:30.660402    5636 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0108 21:25:30.660402    5636 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0108 21:25:30.661785    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0108 21:25:30.661785    5636 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0108 21:25:30.661785    5636 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0108 21:25:30.662387    5636 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0108 21:25:30.663074    5636 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-554300-m02 san=[172.29.96.43 172.29.96.43 localhost 127.0.0.1 minikube multinode-554300-m02]
	I0108 21:25:31.089806    5636 provision.go:172] copyRemoteCerts
	I0108 21:25:31.104419    5636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:25:31.104419    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:25:33.253161    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:25:33.253161    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:25:33.253568    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:25:35.828743    5636 main.go:141] libmachine: [stdout =====>] : 172.29.96.43
	
	I0108 21:25:35.828931    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:25:35.828931    5636 sshutil.go:53] new ssh client: &{IP:172.29.96.43 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300-m02\id_rsa Username:docker}
	I0108 21:25:35.936666    5636 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.832223s)
	I0108 21:25:35.936666    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0108 21:25:35.936666    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:25:35.976142    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0108 21:25:35.976142    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0108 21:25:36.011577    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0108 21:25:36.011939    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:25:36.053697    5636 provision.go:86] duration metric: configureAuth took 14.8034286s
	I0108 21:25:36.053800    5636 buildroot.go:189] setting minikube options for container-runtime
	I0108 21:25:36.054429    5636 config.go:182] Loaded profile config "multinode-554300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:25:36.054519    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:25:38.194632    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:25:38.195057    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:25:38.195256    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:25:40.772438    5636 main.go:141] libmachine: [stdout =====>] : 172.29.96.43
	
	I0108 21:25:40.772438    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:25:40.778991    5636 main.go:141] libmachine: Using SSH client type: native
	I0108 21:25:40.779665    5636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.96.43 22 <nil> <nil>}
	I0108 21:25:40.779665    5636 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 21:25:40.919219    5636 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0108 21:25:40.919219    5636 buildroot.go:70] root file system type: tmpfs
	I0108 21:25:40.920717    5636 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 21:25:40.920717    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:25:43.058494    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:25:43.058494    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:25:43.058579    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:25:45.599251    5636 main.go:141] libmachine: [stdout =====>] : 172.29.96.43
	
	I0108 21:25:45.599514    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:25:45.605327    5636 main.go:141] libmachine: Using SSH client type: native
	I0108 21:25:45.606086    5636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.96.43 22 <nil> <nil>}
	I0108 21:25:45.606086    5636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.29.107.59"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 21:25:45.769879    5636 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.29.107.59
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 21:25:45.769879    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:25:47.904304    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:25:47.904687    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:25:47.904687    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:25:50.408594    5636 main.go:141] libmachine: [stdout =====>] : 172.29.96.43
	
	I0108 21:25:50.408594    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:25:50.413484    5636 main.go:141] libmachine: Using SSH client type: native
	I0108 21:25:50.414334    5636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.96.43 22 <nil> <nil>}
	I0108 21:25:50.414334    5636 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 21:25:51.388979    5636 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0108 21:25:51.388979    5636 machine.go:91] provisioned docker machine in 39.7945967s
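The diff-or-install one-liner above found no existing /lib/systemd/system/docker.service on the fresh node, so the generated unit (with the cleared ExecStart= and the NO_PROXY environment pointing at the control-plane IP) was moved into place, enabled, and restarted. If Docker ever failed to come up at this step, the effective unit could be inspected over SSH from the test host; an illustrative check, assuming minikube's --node/-n flag:

    out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m02 "sudo systemctl cat docker"
    out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m02 "sudo systemctl show docker -p ExecStart,Environment"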
	I0108 21:25:51.388979    5636 client.go:171] LocalClient.Create took 1m50.1946701s
	I0108 21:25:51.388979    5636 start.go:167] duration metric: libmachine.API.Create for "multinode-554300" took 1m50.1946701s
	I0108 21:25:51.388979    5636 start.go:300] post-start starting for "multinode-554300-m02" (driver="hyperv")
	I0108 21:25:51.388979    5636 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:25:51.401965    5636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:25:51.401965    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:25:53.547940    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:25:53.547940    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:25:53.548042    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:25:56.122606    5636 main.go:141] libmachine: [stdout =====>] : 172.29.96.43
	
	I0108 21:25:56.122606    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:25:56.123311    5636 sshutil.go:53] new ssh client: &{IP:172.29.96.43 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300-m02\id_rsa Username:docker}
	I0108 21:25:56.233267    5636 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8311989s)
	I0108 21:25:56.247692    5636 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:25:56.253076    5636 command_runner.go:130] > NAME=Buildroot
	I0108 21:25:56.253076    5636 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0108 21:25:56.253157    5636 command_runner.go:130] > ID=buildroot
	I0108 21:25:56.253157    5636 command_runner.go:130] > VERSION_ID=2021.02.12
	I0108 21:25:56.253157    5636 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0108 21:25:56.253157    5636 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 21:25:56.253281    5636 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0108 21:25:56.253649    5636 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0108 21:25:56.254341    5636 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem -> 30082.pem in /etc/ssl/certs
	I0108 21:25:56.254341    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem -> /etc/ssl/certs/30082.pem
	I0108 21:25:56.268773    5636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:25:56.284207    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem --> /etc/ssl/certs/30082.pem (1708 bytes)
	I0108 21:25:56.324229    5636 start.go:303] post-start completed in 4.9352253s
	I0108 21:25:56.327435    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:25:58.454911    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:25:58.454911    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:25:58.455113    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:26:01.024719    5636 main.go:141] libmachine: [stdout =====>] : 172.29.96.43
	
	I0108 21:26:01.024953    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:26:01.025130    5636 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\config.json ...
	I0108 21:26:01.027710    5636 start.go:128] duration metric: createHost completed in 1m59.8352559s
	I0108 21:26:01.027710    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:26:03.172279    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:26:03.172279    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:26:03.172279    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:26:05.715599    5636 main.go:141] libmachine: [stdout =====>] : 172.29.96.43
	
	I0108 21:26:05.715599    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:26:05.722118    5636 main.go:141] libmachine: Using SSH client type: native
	I0108 21:26:05.722956    5636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.96.43 22 <nil> <nil>}
	I0108 21:26:05.722956    5636 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 21:26:05.863588    5636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704749165.869183515
	
	I0108 21:26:05.863588    5636 fix.go:206] guest clock: 1704749165.869183515
	I0108 21:26:05.863687    5636 fix.go:219] Guest: 2024-01-08 21:26:05.869183515 +0000 UTC Remote: 2024-01-08 21:26:01.0277107 +0000 UTC m=+325.296321901 (delta=4.841472815s)
	I0108 21:26:05.863785    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:26:07.958840    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:26:07.958840    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:26:07.959045    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:26:10.501529    5636 main.go:141] libmachine: [stdout =====>] : 172.29.96.43
	
	I0108 21:26:10.501529    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:26:10.507387    5636 main.go:141] libmachine: Using SSH client type: native
	I0108 21:26:10.508145    5636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.96.43 22 <nil> <nil>}
	I0108 21:26:10.508145    5636 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1704749165
	I0108 21:26:10.658349    5636 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan  8 21:26:05 UTC 2024
	
	I0108 21:26:10.658349    5636 fix.go:226] clock set: Mon Jan  8 21:26:05 UTC 2024
	 (err=<nil>)
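The exchange above is the guest-clock fix-up: date +%s.%N reports the guest at 1704749165.869 while the host-side reference time is 21:26:01.027, a skew of about 4.84s, so the guest clock is reset with sudo date -s @1704749165. The sketch below shows that skew check in Go, run locally instead of over SSH; the 1-second tolerance and the helper names are assumptions, not minikube's actual logic.

package main

import (
	"fmt"
	"math"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// guestClock reads a clock by running `date +%s.%N`. In this sketch the
// command runs locally; the real flow runs it on the guest over SSH.
func guestClock() (time.Time, error) {
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		return time.Time{}, err
	}
	parts := strings.SplitN(strings.TrimSpace(string(out)), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	remote := time.Now() // host-side reference time
	guest, err := guestClock()
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(remote)
	fmt.Printf("guest clock: %v, delta: %v\n", guest, delta)
	// If the skew exceeds a tolerance (value assumed here), reset the guest clock.
	if math.Abs(delta.Seconds()) > 1.0 {
		fmt.Printf("would run: sudo date -s @%d\n", remote.Unix())
	}
}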
	I0108 21:26:10.658349    5636 start.go:83] releasing machines lock for "multinode-554300-m02", held for 2m9.4658464s
	I0108 21:26:10.658349    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:26:12.840090    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:26:12.840153    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:26:12.840153    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:26:15.396830    5636 main.go:141] libmachine: [stdout =====>] : 172.29.96.43
	
	I0108 21:26:15.396830    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:26:15.398087    5636 out.go:177] * Found network options:
	I0108 21:26:15.399037    5636 out.go:177]   - NO_PROXY=172.29.107.59
	W0108 21:26:15.399612    5636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 21:26:15.400390    5636 out.go:177]   - NO_PROXY=172.29.107.59
	W0108 21:26:15.400902    5636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0108 21:26:15.402389    5636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 21:26:15.405102    5636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:26:15.405102    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:26:15.420135    5636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 21:26:15.420135    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:26:17.609032    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:26:17.609032    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:26:17.609127    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:26:17.655881    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:26:17.656154    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:26:17.656154    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:26:20.268813    5636 main.go:141] libmachine: [stdout =====>] : 172.29.96.43
	
	I0108 21:26:20.268813    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:26:20.269344    5636 sshutil.go:53] new ssh client: &{IP:172.29.96.43 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300-m02\id_rsa Username:docker}
	I0108 21:26:20.294568    5636 main.go:141] libmachine: [stdout =====>] : 172.29.96.43
	
	I0108 21:26:20.294568    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:26:20.295154    5636 sshutil.go:53] new ssh client: &{IP:172.29.96.43 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300-m02\id_rsa Username:docker}
	I0108 21:26:20.369673    5636 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0108 21:26:20.371015    5636 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9508557s)
	W0108 21:26:20.371015    5636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 21:26:20.387541    5636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:26:20.533118    5636 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 21:26:20.533118    5636 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1279901s)
	I0108 21:26:20.533118    5636 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0108 21:26:20.533267    5636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
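The find/mv pair above sidelines any pre-existing bridge or podman CNI config (here /etc/cni/net.d/87-podman-bridge.conflist) by renaming it with a .mk_disabled suffix, so only the CNI installed later (kindnet) is active. A rough Go equivalent of that shell pipeline, with the directory as a parameter; function name and error handling are illustrative, not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs so they stop
// taking precedence over the CNI about to be installed.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	files, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("disabled:", files)
}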
	I0108 21:26:20.533267    5636 start.go:475] detecting cgroup driver to use...
	I0108 21:26:20.533442    5636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:26:20.566511    5636 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0108 21:26:20.580147    5636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0108 21:26:20.612532    5636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 21:26:20.628306    5636 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 21:26:20.641121    5636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 21:26:20.669808    5636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 21:26:20.698905    5636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 21:26:20.728735    5636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 21:26:20.758444    5636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:26:20.785988    5636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 21:26:20.815020    5636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:26:20.829236    5636 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0108 21:26:20.845407    5636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:26:20.878594    5636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:26:21.042360    5636 ssh_runner.go:195] Run: sudo systemctl restart containerd
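The run of sed commands above flips containerd to the cgroupfs driver: sandbox_image is pinned to registry.k8s.io/pause:3.9, SystemdCgroup is forced to false, the v1 runtime names are rewritten to io.containerd.runc.v2, and conf_dir is pointed at /etc/cni/net.d, followed by a daemon-reload and restart. Below is a small Go sketch of the same kind of in-place rewrite for one of those keys; the regex mirrors the sed line, but the function is illustrative rather than minikube's code.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setSystemdCgroup rewrites the SystemdCgroup key in a containerd config,
// mirroring: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
func setSystemdCgroup(path string, value bool) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("${1}SystemdCgroup = %t", value)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
		fmt.Println("error:", err)
	}
}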
	I0108 21:26:21.072039    5636 start.go:475] detecting cgroup driver to use...
	I0108 21:26:21.086236    5636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 21:26:21.103266    5636 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0108 21:26:21.103266    5636 command_runner.go:130] > [Unit]
	I0108 21:26:21.103266    5636 command_runner.go:130] > Description=Docker Application Container Engine
	I0108 21:26:21.103266    5636 command_runner.go:130] > Documentation=https://docs.docker.com
	I0108 21:26:21.103266    5636 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0108 21:26:21.103266    5636 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0108 21:26:21.103266    5636 command_runner.go:130] > StartLimitBurst=3
	I0108 21:26:21.103266    5636 command_runner.go:130] > StartLimitIntervalSec=60
	I0108 21:26:21.103266    5636 command_runner.go:130] > [Service]
	I0108 21:26:21.103266    5636 command_runner.go:130] > Type=notify
	I0108 21:26:21.103266    5636 command_runner.go:130] > Restart=on-failure
	I0108 21:26:21.103266    5636 command_runner.go:130] > Environment=NO_PROXY=172.29.107.59
	I0108 21:26:21.103266    5636 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0108 21:26:21.104271    5636 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0108 21:26:21.104271    5636 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0108 21:26:21.104271    5636 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0108 21:26:21.104271    5636 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0108 21:26:21.104271    5636 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0108 21:26:21.104271    5636 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0108 21:26:21.104271    5636 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0108 21:26:21.104271    5636 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0108 21:26:21.104271    5636 command_runner.go:130] > ExecStart=
	I0108 21:26:21.104271    5636 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0108 21:26:21.104271    5636 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0108 21:26:21.104271    5636 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0108 21:26:21.104271    5636 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0108 21:26:21.104271    5636 command_runner.go:130] > LimitNOFILE=infinity
	I0108 21:26:21.104271    5636 command_runner.go:130] > LimitNPROC=infinity
	I0108 21:26:21.104271    5636 command_runner.go:130] > LimitCORE=infinity
	I0108 21:26:21.104271    5636 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0108 21:26:21.104271    5636 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0108 21:26:21.104271    5636 command_runner.go:130] > TasksMax=infinity
	I0108 21:26:21.104271    5636 command_runner.go:130] > TimeoutStartSec=0
	I0108 21:26:21.104271    5636 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0108 21:26:21.104271    5636 command_runner.go:130] > Delegate=yes
	I0108 21:26:21.104271    5636 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0108 21:26:21.104271    5636 command_runner.go:130] > KillMode=process
	I0108 21:26:21.104271    5636 command_runner.go:130] > [Install]
	I0108 21:26:21.104271    5636 command_runner.go:130] > WantedBy=multi-user.target
	I0108 21:26:21.116331    5636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:26:21.144826    5636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:26:21.179986    5636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:26:21.209868    5636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:26:21.245600    5636 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:26:21.299646    5636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:26:21.321392    5636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:26:21.346824    5636 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0108 21:26:21.361760    5636 ssh_runner.go:195] Run: which cri-dockerd
	I0108 21:26:21.366574    5636 command_runner.go:130] > /usr/bin/cri-dockerd
	I0108 21:26:21.381195    5636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 21:26:21.395139    5636 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0108 21:26:21.439892    5636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 21:26:21.615342    5636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 21:26:21.771440    5636 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0108 21:26:21.771490    5636 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0108 21:26:21.815417    5636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:26:21.977769    5636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 21:26:23.498879    5636 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5211018s)
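docker.go:574 writes a 130-byte /etc/docker/daemon.json that pins Docker to the cgroupfs driver before the restart above. The log does not show the file's contents; the following is a plausible reconstruction of that step, where only the cgroup driver is confirmed by the log and the other JSON fields are assumptions.

package main

import (
	"encoding/json"
	"fmt"
)

// daemonConfig models a few fields a cgroupfs-pinned daemon.json might carry.
// Field choice is an assumption; only the cgroup driver is confirmed by the log.
type daemonConfig struct {
	ExecOpts  []string          `json:"exec-opts"`
	LogDriver string            `json:"log-driver"`
	LogOpts   map[string]string `json:"log-opts"`
}

func main() {
	cfg := daemonConfig{
		ExecOpts:  []string{"native.cgroupdriver=cgroupfs"},
		LogDriver: "json-file",
		LogOpts:   map[string]string{"max-size": "100m"},
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	// In the real flow this content is copied to /etc/docker/daemon.json and
	// docker is restarted via systemctl; here we only print it.
	fmt.Println(string(out))
}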
	I0108 21:26:23.513445    5636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0108 21:26:23.545824    5636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0108 21:26:23.581708    5636 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0108 21:26:23.758429    5636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 21:26:23.933811    5636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:26:24.105195    5636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0108 21:26:24.142984    5636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0108 21:26:24.175103    5636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:26:24.347156    5636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0108 21:26:24.454794    5636 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 21:26:24.468282    5636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 21:26:24.476451    5636 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0108 21:26:24.476451    5636 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 21:26:24.476451    5636 command_runner.go:130] > Device: 16h/22d	Inode: 950         Links: 1
	I0108 21:26:24.476451    5636 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0108 21:26:24.476451    5636 command_runner.go:130] > Access: 2024-01-08 21:26:24.377095530 +0000
	I0108 21:26:24.476451    5636 command_runner.go:130] > Modify: 2024-01-08 21:26:24.377095530 +0000
	I0108 21:26:24.476451    5636 command_runner.go:130] > Change: 2024-01-08 21:26:24.381095530 +0000
	I0108 21:26:24.476451    5636 command_runner.go:130] >  Birth: -
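start.go:522 gives cri-dockerd 60s to expose /var/run/cri-dockerd.sock and confirms it with stat; here the socket already exists on the first check. A minimal polling sketch of that wait, where the 500ms interval and the helper name are assumptions.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, or times out.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for socket %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("cri-dockerd socket is ready")
}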
	I0108 21:26:24.476451    5636 start.go:543] Will wait 60s for crictl version
	I0108 21:26:24.491155    5636 ssh_runner.go:195] Run: which crictl
	I0108 21:26:24.496221    5636 command_runner.go:130] > /usr/bin/crictl
	I0108 21:26:24.509125    5636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 21:26:24.581417    5636 command_runner.go:130] > Version:  0.1.0
	I0108 21:26:24.581417    5636 command_runner.go:130] > RuntimeName:  docker
	I0108 21:26:24.581417    5636 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0108 21:26:24.582242    5636 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 21:26:24.582917    5636 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0108 21:26:24.591842    5636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 21:26:24.624850    5636 command_runner.go:130] > 24.0.7
	I0108 21:26:24.634846    5636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 21:26:24.664844    5636 command_runner.go:130] > 24.0.7
	I0108 21:26:24.666614    5636 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0108 21:26:24.667069    5636 out.go:177]   - env NO_PROXY=172.29.107.59
	I0108 21:26:24.667916    5636 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0108 21:26:24.672551    5636 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0108 21:26:24.672551    5636 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0108 21:26:24.672551    5636 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0108 21:26:24.672551    5636 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:21:91:d6 Flags:up|broadcast|multicast|running}
	I0108 21:26:24.676069    5636 ip.go:210] interface addr: fe80::2be2:9d7a:5cc4:f25c/64
	I0108 21:26:24.676069    5636 ip.go:210] interface addr: 172.29.96.1/20
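ip.go walks the host's interfaces looking for the one whose name starts with "vEthernet (Default Switch)" and takes its IPv4 address (172.29.96.1/20 here) as the host-visible gateway for host.minikube.internal. A small sketch of that lookup with the standard net package; the prefix match and IPv4 filter follow the log, while the helper name is an assumption.

package main

import (
	"fmt"
	"net"
	"strings"
)

// findInterfaceIPv4 returns the first IPv4 address on an interface whose name
// starts with the given prefix, e.g. "vEthernet (Default Switch)".
func findInterfaceIPv4(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, iface := range ifaces {
		if !strings.HasPrefix(iface.Name, prefix) {
			continue
		}
		addrs, err := iface.Addrs()
		if err != nil {
			return nil, err
		}
		for _, addr := range addrs {
			if ipnet, ok := addr.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				return ipnet.IP, nil
			}
		}
	}
	return nil, fmt.Errorf("no interface matching prefix %q", prefix)
}

func main() {
	ip, err := findInterfaceIPv4("vEthernet (Default Switch)")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("host gateway IP:", ip)
}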
	I0108 21:26:24.687549    5636 ssh_runner.go:195] Run: grep 172.29.96.1	host.minikube.internal$ /etc/hosts
	I0108 21:26:24.693233    5636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
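The grep/echo pipeline above is an idempotent hosts-file update: any existing host.minikube.internal line is dropped and a fresh one pointing at the gateway IP 172.29.96.1 is appended. The same idea in Go, with path and function name chosen for illustration.

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost removes any existing entry for hostname and appends ip<TAB>hostname.
func upsertHost(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "172.29.96.1", "host.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}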
	I0108 21:26:24.709837    5636 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300 for IP: 172.29.96.43
	I0108 21:26:24.709915    5636 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:26:24.710740    5636 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0108 21:26:24.711197    5636 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0108 21:26:24.711459    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 21:26:24.711746    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0108 21:26:24.711800    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 21:26:24.711800    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 21:26:24.712366    5636 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\3008.pem (1338 bytes)
	W0108 21:26:24.712366    5636 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\3008_empty.pem, impossibly tiny 0 bytes
	I0108 21:26:24.712366    5636 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0108 21:26:24.713477    5636 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0108 21:26:24.713714    5636 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0108 21:26:24.713714    5636 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0108 21:26:24.714509    5636 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem (1708 bytes)
	I0108 21:26:24.714794    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:26:24.715005    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\3008.pem -> /usr/share/ca-certificates/3008.pem
	I0108 21:26:24.715005    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem -> /usr/share/ca-certificates/30082.pem
	I0108 21:26:24.716006    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:26:24.756244    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 21:26:24.794470    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:26:24.829582    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 21:26:24.865813    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:26:24.903179    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\3008.pem --> /usr/share/ca-certificates/3008.pem (1338 bytes)
	I0108 21:26:24.941693    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem --> /usr/share/ca-certificates/30082.pem (1708 bytes)
	I0108 21:26:24.993026    5636 ssh_runner.go:195] Run: openssl version
	I0108 21:26:25.000886    5636 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0108 21:26:25.012798    5636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:26:25.044252    5636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:26:25.051848    5636 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 20:14 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:26:25.051848    5636 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:14 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:26:25.067394    5636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:26:25.075802    5636 command_runner.go:130] > b5213941
	I0108 21:26:25.089453    5636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:26:25.120470    5636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3008.pem && ln -fs /usr/share/ca-certificates/3008.pem /etc/ssl/certs/3008.pem"
	I0108 21:26:25.150509    5636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3008.pem
	I0108 21:26:25.157213    5636 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 20:28 /usr/share/ca-certificates/3008.pem
	I0108 21:26:25.157274    5636 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:28 /usr/share/ca-certificates/3008.pem
	I0108 21:26:25.170779    5636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3008.pem
	I0108 21:26:25.178068    5636 command_runner.go:130] > 51391683
	I0108 21:26:25.191719    5636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3008.pem /etc/ssl/certs/51391683.0"
	I0108 21:26:25.218258    5636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30082.pem && ln -fs /usr/share/ca-certificates/30082.pem /etc/ssl/certs/30082.pem"
	I0108 21:26:25.247662    5636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/30082.pem
	I0108 21:26:25.254331    5636 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 20:28 /usr/share/ca-certificates/30082.pem
	I0108 21:26:25.254453    5636 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:28 /usr/share/ca-certificates/30082.pem
	I0108 21:26:25.270380    5636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30082.pem
	I0108 21:26:25.278257    5636 command_runner.go:130] > 3ec20f2e
	I0108 21:26:25.294347    5636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/30082.pem /etc/ssl/certs/3ec20f2e.0"
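Each CA bundle above is activated the same way: hash the PEM with openssl x509 -hash -noout, then symlink /etc/ssl/certs/<hash>.0 at it (b5213941.0, 51391683.0, 3ec20f2e.0) so OpenSSL's lookup-by-hash finds it. A compact Go wrapper around those two commands; helper names are assumptions, and it shells out to openssl rather than reimplementing the subject hash.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert links certPath into certsDir under its OpenSSL subject hash,
// e.g. /etc/ssl/certs/b5213941.0 -> /usr/share/ca-certificates/minikubeCA.pem.
func installCACert(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return link, nil // already installed
	}
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("linked:", link)
}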
	I0108 21:26:25.325289    5636 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 21:26:25.331187    5636 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:26:25.331433    5636 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:26:25.343394    5636 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 21:26:25.377425    5636 command_runner.go:130] > cgroupfs
	I0108 21:26:25.378677    5636 cni.go:84] Creating CNI manager for ""
	I0108 21:26:25.378677    5636 cni.go:136] 2 nodes found, recommending kindnet
	I0108 21:26:25.378677    5636 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:26:25.378677    5636 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.29.96.43 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-554300 NodeName:multinode-554300-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.29.107.59"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.29.96.43 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 21:26:25.378677    5636 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.29.96.43
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-554300-m02"
	  kubeletExtraArgs:
	    node-ip: 172.29.96.43
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.29.107.59"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:26:25.378677    5636 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-554300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.96.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-554300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:26:25.393053    5636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 21:26:25.405223    5636 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I0108 21:26:25.406323    5636 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0108 21:26:25.421508    5636 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0108 21:26:25.438360    5636 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl
	I0108 21:26:25.438360    5636 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm
	I0108 21:26:25.438360    5636 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet
	I0108 21:26:26.497308    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0108 21:26:26.510315    5636 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0108 21:26:26.518324    5636 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0108 21:26:26.519128    5636 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0108 21:26:26.519441    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0108 21:26:30.132515    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0108 21:26:30.144466    5636 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0108 21:26:30.151472    5636 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0108 21:26:30.152590    5636 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0108 21:26:30.152730    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0108 21:26:35.150102    5636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:26:35.171927    5636 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0108 21:26:35.186766    5636 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0108 21:26:35.192762    5636 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0108 21:26:35.193458    5636 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0108 21:26:35.193458    5636 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
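Each of the three binaries (kubectl, kubeadm, kubelet) goes through the same pattern above: stat -c "%s %y" on the target path, and only on a miss is the cached copy pushed to /var/lib/minikube/binaries/v1.28.4/. A sketch of that copy-if-missing decision, with a local file copy standing in for the ssh_runner scp; names and the size-only comparison are illustrative.

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// ensureBinary copies src to dst only when dst is missing or differs in size,
// mirroring the stat-then-copy pattern in the log.
func ensureBinary(src, dst string) error {
	srcInfo, err := os.Stat(src)
	if err != nil {
		return err
	}
	if dstInfo, err := os.Stat(dst); err == nil && dstInfo.Size() == srcInfo.Size() {
		return nil // already in place
	}
	if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Source path as cached on the Windows host, destination as on the guest;
	// in the real flow the copy crosses machines over scp.
	err := ensureBinary(
		`C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4\kubectl`,
		`/var/lib/minikube/binaries/v1.28.4/kubectl`,
	)
	if err != nil {
		fmt.Println("error:", err)
	}
}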
	I0108 21:26:35.787128    5636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0108 21:26:35.801991    5636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0108 21:26:35.827328    5636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:26:35.868313    5636 ssh_runner.go:195] Run: grep 172.29.107.59	control-plane.minikube.internal$ /etc/hosts
	I0108 21:26:35.874991    5636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.107.59	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:26:35.891516    5636 host.go:66] Checking if "multinode-554300" exists ...
	I0108 21:26:35.891838    5636 config.go:182] Loaded profile config "multinode-554300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:26:35.891838    5636 start.go:304] JoinCluster: &{Name:multinode-554300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-554300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.29.107.59 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.96.43 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:26:35.892444    5636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0108 21:26:35.892444    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:26:38.019064    5636 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:26:38.019064    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:26:38.019178    5636 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:26:40.567837    5636 main.go:141] libmachine: [stdout =====>] : 172.29.107.59
	
	I0108 21:26:40.567970    5636 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:26:40.568208    5636 sshutil.go:53] new ssh client: &{IP:172.29.107.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300\id_rsa Username:docker}
	I0108 21:26:40.758922    5636 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ap99lv.pu7nf2ie6ydngb38 --discovery-token-ca-cert-hash sha256:5d4a576aad216c3b59d844299451f9173aefcd7f6ecd29b777d9935fab24b02c 
	I0108 21:26:40.758922    5636 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.8664546s)
	I0108 21:26:40.758922    5636 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.29.96.43 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0108 21:26:40.758922    5636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ap99lv.pu7nf2ie6ydngb38 --discovery-token-ca-cert-hash sha256:5d4a576aad216c3b59d844299451f9173aefcd7f6ecd29b777d9935fab24b02c --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-554300-m02"
	I0108 21:26:40.819302    5636 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 21:26:41.028688    5636 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 21:26:41.028795    5636 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 21:26:41.085212    5636 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:26:41.086246    5636 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:26:41.086435    5636 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 21:26:41.247021    5636 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0108 21:26:42.774056    5636 command_runner.go:130] > This node has joined the cluster:
	I0108 21:26:42.774844    5636 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0108 21:26:42.774844    5636 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0108 21:26:42.774844    5636 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0108 21:26:42.777185    5636 command_runner.go:130] ! W0108 21:26:40.826484    1353 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 21:26:42.777185    5636 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:26:42.777185    5636 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ap99lv.pu7nf2ie6ydngb38 --discovery-token-ca-cert-hash sha256:5d4a576aad216c3b59d844299451f9173aefcd7f6ecd29b777d9935fab24b02c --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-554300-m02": (2.0182533s)
	I0108 21:26:42.777404    5636 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0108 21:26:42.964166    5636 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0108 21:26:43.138267    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28 minikube.k8s.io/name=multinode-554300 minikube.k8s.io/updated_at=2024_01_08T21_26_43_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:43.285285    5636 command_runner.go:130] > node/multinode-554300-m02 labeled
	I0108 21:26:43.285285    5636 start.go:306] JoinCluster complete in 7.3934111s
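The worker join above is two shell round-trips: kubeadm token create --print-join-command --ttl=0 on the control plane, then the emitted kubeadm join command (plus --ignore-preflight-errors=all, the cri-dockerd socket, and the node name) on the new node, followed by enabling kubelet and labeling the node. A condensed sketch of that orchestration, with os/exec standing in for the ssh_runner; the flags are copied from the log, everything else is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runOn stands in for running a command over SSH on the named host.
// In this sketch it just runs the command locally.
func runOn(host string, args ...string) (string, error) {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

// joinWorker asks the control plane for a join command and replays it on the
// worker with the extra flags seen in the log, then starts kubelet.
func joinWorker(controlPlane, worker, nodeName string) error {
	joinCmd, err := runOn(controlPlane,
		"kubeadm", "token", "create", "--print-join-command", "--ttl=0")
	if err != nil {
		return fmt.Errorf("token create: %w", err)
	}
	full := joinCmd + " --ignore-preflight-errors=all" +
		" --cri-socket /var/run/cri-dockerd.sock --node-name=" + nodeName
	if _, err := runOn(worker, "sh", "-c", full); err != nil {
		return fmt.Errorf("kubeadm join: %w", err)
	}
	_, err = runOn(worker, "sh", "-c",
		"systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet")
	return err
}

func main() {
	if err := joinWorker("multinode-554300", "multinode-554300-m02", "multinode-554300-m02"); err != nil {
		fmt.Println("error:", err)
	}
}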
	I0108 21:26:43.285285    5636 cni.go:84] Creating CNI manager for ""
	I0108 21:26:43.285285    5636 cni.go:136] 2 nodes found, recommending kindnet
	I0108 21:26:43.300276    5636 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:26:43.309010    5636 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 21:26:43.309084    5636 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0108 21:26:43.309084    5636 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0108 21:26:43.309084    5636 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 21:26:43.309084    5636 command_runner.go:130] > Access: 2024-01-08 21:21:45.568293000 +0000
	I0108 21:26:43.309084    5636 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0108 21:26:43.309084    5636 command_runner.go:130] > Change: 2024-01-08 21:21:36.017000000 +0000
	I0108 21:26:43.309084    5636 command_runner.go:130] >  Birth: -
	I0108 21:26:43.309423    5636 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 21:26:43.309423    5636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 21:26:43.360737    5636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:26:43.751544    5636 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 21:26:43.751655    5636 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 21:26:43.751655    5636 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 21:26:43.751655    5636 command_runner.go:130] > daemonset.apps/kindnet configured
	I0108 21:26:43.752412    5636 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 21:26:43.752412    5636 kapi.go:59] client config for multinode-554300: &rest.Config{Host:"https://172.29.107.59:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x188c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:26:43.753377    5636 round_trippers.go:463] GET https://172.29.107.59:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 21:26:43.753377    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:43.753377    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:43.753377    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:43.768314    5636 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0108 21:26:43.768394    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:43.768394    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:43 GMT
	I0108 21:26:43.768394    5636 round_trippers.go:580]     Audit-Id: 636da86b-180d-44e9-90b0-3e797e98c8da
	I0108 21:26:43.768394    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:43.768394    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:43.768394    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:43.768460    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:43.768460    5636 round_trippers.go:580]     Content-Length: 291
	I0108 21:26:43.768460    5636 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a05171e3-49ee-4610-ae26-e93c6a171dfe","resourceVersion":"442","creationTimestamp":"2024-01-08T21:23:32Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 21:26:43.768690    5636 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-554300" context rescaled to 1 replicas
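kapi.go:248 rescales the coredns deployment to 1 replica through the autoscaling/v1 Scale subresource, the same GET/update pair visible in the round_trippers lines above. A client-go sketch of that call under the assumption that the profile's kubeconfig is at the path shown in the log; it requires the k8s.io/client-go module and is not minikube's implementation.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube7\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	// Read the current scale of the coredns deployment via the Scale subresource.
	scale, err := client.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := client.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("coredns scaled to 1 replica")
}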
	I0108 21:26:43.768745    5636 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.29.96.43 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0108 21:26:43.769855    5636 out.go:177] * Verifying Kubernetes components...
	I0108 21:26:43.784124    5636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:26:43.806123    5636 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 21:26:43.807134    5636 kapi.go:59] client config for multinode-554300: &rest.Config{Host:"https://172.29.107.59:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x188c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:26:43.807134    5636 node_ready.go:35] waiting up to 6m0s for node "multinode-554300-m02" to be "Ready" ...
	I0108 21:26:43.808122    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:43.808122    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:43.808122    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:43.808122    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:43.812115    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:43.812115    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:43.812115    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:43.812271    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:43.812271    5636 round_trippers.go:580]     Content-Length: 3912
	I0108 21:26:43.812271    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:43 GMT
	I0108 21:26:43.812271    5636 round_trippers.go:580]     Audit-Id: 62b97666-f57f-46ee-bf90-f71744c266dc
	I0108 21:26:43.812271    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:43.812271    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:43.812487    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"606","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:met
adata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec" [truncated 2888 chars]
	I0108 21:26:44.318167    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:44.318167    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:44.318420    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:44.318482    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:44.327738    5636 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0108 21:26:44.327778    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:44.327778    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:44.327835    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:44.327835    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:44.327835    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:44.327835    5636 round_trippers.go:580]     Content-Length: 4021
	I0108 21:26:44.327835    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:44 GMT
	I0108 21:26:44.327835    5636 round_trippers.go:580]     Audit-Id: 12fc06a8-80f9-4c28-b938-97460fe828ef
	I0108 21:26:44.327995    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"609","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 2997 chars]
	I0108 21:26:44.822508    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:44.822508    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:44.822508    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:44.822508    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:44.826632    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:26:44.826632    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:44.826632    5636 round_trippers.go:580]     Audit-Id: 52a98c66-3911-47be-b48d-1c9dbd001929
	I0108 21:26:44.826632    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:44.826632    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:44.827151    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:44.827151    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:44.827151    5636 round_trippers.go:580]     Content-Length: 4021
	I0108 21:26:44.827151    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:44 GMT
	I0108 21:26:44.827270    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"609","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 2997 chars]
	I0108 21:26:45.309637    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:45.309760    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:45.309760    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:45.309760    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:45.313681    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:45.313681    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:45.314093    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:45 GMT
	I0108 21:26:45.314093    5636 round_trippers.go:580]     Audit-Id: 8b0c187e-e5e8-4944-a161-9a8a5d027bb4
	I0108 21:26:45.314093    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:45.314093    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:45.314093    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:45.314093    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:45.314093    5636 round_trippers.go:580]     Content-Length: 4021
	I0108 21:26:45.314285    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"609","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 2997 chars]
	I0108 21:26:45.816752    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:45.816752    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:45.816752    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:45.816752    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:45.898636    5636 round_trippers.go:574] Response Status: 200 OK in 81 milliseconds
	I0108 21:26:45.898856    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:45.898924    5636 round_trippers.go:580]     Content-Length: 4021
	I0108 21:26:45.898924    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:45 GMT
	I0108 21:26:45.898924    5636 round_trippers.go:580]     Audit-Id: 246be8b9-050b-4c67-ba21-02474ca0c8ec
	I0108 21:26:45.898924    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:45.898924    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:45.898924    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:45.898924    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:45.899159    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"609","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 2997 chars]
	I0108 21:26:45.899159    5636 node_ready.go:58] node "multinode-554300-m02" has status "Ready":"False"
	I0108 21:26:46.322796    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:46.322796    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:46.322796    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:46.322796    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:46.326368    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:46.326368    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:46.326368    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:46.326368    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:46.326368    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:46.326613    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:46.326613    5636 round_trippers.go:580]     Content-Length: 4021
	I0108 21:26:46.326613    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:46 GMT
	I0108 21:26:46.326613    5636 round_trippers.go:580]     Audit-Id: 3cb09ecf-9cd8-43f9-928a-573d1633814a
	I0108 21:26:46.326794    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"609","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 2997 chars]
	I0108 21:26:46.812329    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:46.812329    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:46.812329    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:46.812329    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:46.816923    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:26:46.817133    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:46.817133    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:46 GMT
	I0108 21:26:46.817133    5636 round_trippers.go:580]     Audit-Id: 700c4129-4443-4fb8-bf56-ed6ad62e67d8
	I0108 21:26:46.817133    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:46.817133    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:46.817242    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:46.817242    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:46.817242    5636 round_trippers.go:580]     Content-Length: 4021
	I0108 21:26:46.817242    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"609","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 2997 chars]
	I0108 21:26:47.314028    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:47.314089    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:47.314089    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:47.314089    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:47.502029    5636 round_trippers.go:574] Response Status: 200 OK in 187 milliseconds
	I0108 21:26:47.502029    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:47.502029    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:47.502029    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:47.502029    5636 round_trippers.go:580]     Content-Length: 4021
	I0108 21:26:47.502484    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:47 GMT
	I0108 21:26:47.502484    5636 round_trippers.go:580]     Audit-Id: 0feeadf2-dc9b-4f2b-9210-9609397e7359
	I0108 21:26:47.502484    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:47.502484    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:47.502611    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"609","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 2997 chars]
	I0108 21:26:47.821097    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:47.821097    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:47.821097    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:47.821097    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:47.828021    5636 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 21:26:47.828021    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:47.828598    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:47 GMT
	I0108 21:26:47.828598    5636 round_trippers.go:580]     Audit-Id: 1ce00426-607a-4217-9381-0211b6de11fb
	I0108 21:26:47.828598    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:47.828639    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:47.828639    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:47.828639    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:47.828639    5636 round_trippers.go:580]     Content-Length: 4021
	I0108 21:26:47.828708    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"609","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 2997 chars]
	I0108 21:26:48.312047    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:48.312047    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:48.312047    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:48.312047    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:48.315047    5636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:26:48.315047    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:48.315047    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:48.316068    5636 round_trippers.go:580]     Content-Length: 4021
	I0108 21:26:48.316068    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:48 GMT
	I0108 21:26:48.316092    5636 round_trippers.go:580]     Audit-Id: 64e49315-a712-4cbd-b9d4-91f9a0323daf
	I0108 21:26:48.316092    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:48.316092    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:48.316092    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:48.316092    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"609","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 2997 chars]
	I0108 21:26:48.316092    5636 node_ready.go:58] node "multinode-554300-m02" has status "Ready":"False"
	I0108 21:26:48.820210    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:48.820210    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:48.820210    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:48.820210    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:48.824228    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:26:48.825054    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:48.825054    5636 round_trippers.go:580]     Audit-Id: 5cfe952f-2a83-4711-ade2-d2a218662650
	I0108 21:26:48.825054    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:48.825054    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:48.825054    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:48.825135    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:48.825135    5636 round_trippers.go:580]     Content-Length: 4021
	I0108 21:26:48.825174    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:48 GMT
	I0108 21:26:48.825306    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"609","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 2997 chars]
	I0108 21:26:49.313806    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:49.313994    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:49.313994    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:49.313994    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:49.318525    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:26:49.318525    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:49.318525    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:49.318525    5636 round_trippers.go:580]     Content-Length: 4021
	I0108 21:26:49.318525    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:49 GMT
	I0108 21:26:49.319514    5636 round_trippers.go:580]     Audit-Id: 9ea621b0-6114-4c83-a1f8-509f86c90d75
	I0108 21:26:49.319514    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:49.319514    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:49.319514    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:49.319514    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"609","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 2997 chars]
	I0108 21:26:49.821980    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:49.821980    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:49.821980    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:49.821980    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:49.825635    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:49.825635    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:49.826627    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:49 GMT
	I0108 21:26:49.826627    5636 round_trippers.go:580]     Audit-Id: 16d0921d-5ffc-402c-8cbb-32e073f0d00b
	I0108 21:26:49.826652    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:49.826652    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:49.826652    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:49.826652    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:49.826652    5636 round_trippers.go:580]     Content-Length: 4021
	I0108 21:26:49.826834    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"609","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 2997 chars]
	I0108 21:26:50.314565    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:50.314624    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:50.314695    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:50.314695    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:50.321619    5636 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 21:26:50.321691    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:50.321691    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:50.321691    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:50.321766    5636 round_trippers.go:580]     Content-Length: 4021
	I0108 21:26:50.321766    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:50 GMT
	I0108 21:26:50.321766    5636 round_trippers.go:580]     Audit-Id: 2290d7b5-9cb1-4e01-95bd-3ede8d953990
	I0108 21:26:50.321766    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:50.321766    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:50.321967    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"609","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 2997 chars]
	I0108 21:26:50.322118    5636 node_ready.go:58] node "multinode-554300-m02" has status "Ready":"False"
	I0108 21:26:50.824150    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:50.824227    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:50.824227    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:50.824227    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:50.836948    5636 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0108 21:26:50.836948    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:50.836948    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:50 GMT
	I0108 21:26:50.836948    5636 round_trippers.go:580]     Audit-Id: 9d80e887-f44e-44ca-a836-35903a5066d1
	I0108 21:26:50.837666    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:50.837666    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:50.837666    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:50.837666    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:50.837666    5636 round_trippers.go:580]     Content-Length: 4021
	I0108 21:26:50.837921    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"609","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 2997 chars]
	I0108 21:26:51.315679    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:51.316360    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:51.316416    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:51.316490    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:51.321182    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:26:51.321660    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:51.321660    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:51 GMT
	I0108 21:26:51.321660    5636 round_trippers.go:580]     Audit-Id: 8407c478-a398-4d09-ba7e-44578cee3775
	I0108 21:26:51.321660    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:51.321660    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:51.321660    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:51.321660    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:51.321660    5636 round_trippers.go:580]     Content-Length: 4021
	I0108 21:26:51.321660    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"609","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 2997 chars]
	I0108 21:26:51.821851    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:51.821927    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:51.821927    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:51.821966    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:51.825857    5636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:26:51.825903    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:51.825903    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:51.825903    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:51.825903    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:51.825903    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:51.825903    5636 round_trippers.go:580]     Content-Length: 4021
	I0108 21:26:51.825903    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:51 GMT
	I0108 21:26:51.825903    5636 round_trippers.go:580]     Audit-Id: 53672ebf-52af-4c77-bdb1-62d8c11961c2
	I0108 21:26:51.825903    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"609","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 2997 chars]
	I0108 21:26:52.319261    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:52.319338    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:52.319338    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:52.319338    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:52.324141    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:26:52.324869    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:52.324869    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:52.324869    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:52.324869    5636 round_trippers.go:580]     Content-Length: 4021
	I0108 21:26:52.324869    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:52 GMT
	I0108 21:26:52.324934    5636 round_trippers.go:580]     Audit-Id: 935aa358-0fa9-4881-83f0-a121f4426195
	I0108 21:26:52.324934    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:52.324934    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:52.324934    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"609","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 2997 chars]
	I0108 21:26:52.324934    5636 node_ready.go:58] node "multinode-554300-m02" has status "Ready":"False"
	I0108 21:26:52.810756    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:52.810756    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:52.810837    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:52.810837    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:52.816769    5636 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 21:26:52.816769    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:52.816769    5636 round_trippers.go:580]     Audit-Id: 9bde22f6-97e9-47e7-b052-fc831d6b6475
	I0108 21:26:52.816769    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:52.816769    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:52.816769    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:52.816769    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:52.816769    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:52 GMT
	I0108 21:26:52.817325    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"623","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0108 21:26:53.318768    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:53.318825    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:53.318825    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:53.318904    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:53.322351    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:53.322351    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:53.322551    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:53.322551    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:53 GMT
	I0108 21:26:53.322551    5636 round_trippers.go:580]     Audit-Id: e488d60b-99c0-4986-aab1-dfdae2af8f91
	I0108 21:26:53.322551    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:53.322551    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:53.322551    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:53.323284    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"623","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0108 21:26:53.813377    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:53.813443    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:53.813443    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:53.813443    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:53.817583    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:26:53.818091    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:53.818091    5636 round_trippers.go:580]     Audit-Id: f901ae86-b085-4abf-acc0-18df9ceeb0b5
	I0108 21:26:53.818091    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:53.818091    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:53.818091    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:53.818091    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:53.818091    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:53 GMT
	I0108 21:26:53.818169    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"623","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0108 21:26:54.320671    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:54.320745    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:54.320805    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:54.320805    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:54.326361    5636 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 21:26:54.326361    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:54.326361    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:54.326361    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:54.326361    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:54.326361    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:54.326361    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:54 GMT
	I0108 21:26:54.326361    5636 round_trippers.go:580]     Audit-Id: 974a6dbe-591d-4906-b129-c13c01523cde
	I0108 21:26:54.327370    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"623","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0108 21:26:54.327370    5636 node_ready.go:58] node "multinode-554300-m02" has status "Ready":"False"
	I0108 21:26:54.811587    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:54.811657    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:54.811657    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:54.811657    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:54.816444    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:26:54.816444    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:54.816444    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:54 GMT
	I0108 21:26:54.816444    5636 round_trippers.go:580]     Audit-Id: 8a3779ff-2a33-4d99-8707-4e094355694d
	I0108 21:26:54.816444    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:54.816444    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:54.816444    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:54.816444    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:54.816444    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"623","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0108 21:26:55.320997    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:55.320997    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:55.320997    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:55.320997    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:55.325529    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:26:55.325929    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:55.325929    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:55.325929    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:55.325929    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:55.325929    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:55 GMT
	I0108 21:26:55.325984    5636 round_trippers.go:580]     Audit-Id: 4538c9ef-b111-4be6-b594-b87b3b64f49f
	I0108 21:26:55.325984    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:55.326326    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"623","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0108 21:26:55.814005    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:55.814005    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:55.814152    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:55.814152    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:55.817893    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:55.817893    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:55.818010    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:55 GMT
	I0108 21:26:55.818010    5636 round_trippers.go:580]     Audit-Id: c0dd89f1-861c-4aae-882f-f5d2e57b1a39
	I0108 21:26:55.818039    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:55.818039    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:55.818039    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:55.818039    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:55.818231    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"623","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0108 21:26:56.317309    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:56.317442    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:56.317442    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:56.317442    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:56.320785    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:56.321363    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:56.321363    5636 round_trippers.go:580]     Audit-Id: f8b63e84-a32a-41b8-b74d-bc02b065f062
	I0108 21:26:56.321363    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:56.321363    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:56.321363    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:56.321363    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:56.321363    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:56 GMT
	I0108 21:26:56.321815    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"623","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0108 21:26:56.808891    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:56.809033    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:56.809033    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:56.809033    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:56.812394    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:56.812867    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:56.812867    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:56.812867    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:56 GMT
	I0108 21:26:56.812937    5636 round_trippers.go:580]     Audit-Id: 64385115-a7d1-44e9-9a01-c7c4d108e25e
	I0108 21:26:56.812937    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:56.812937    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:56.812937    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:56.812937    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"623","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0108 21:26:56.813813    5636 node_ready.go:58] node "multinode-554300-m02" has status "Ready":"False"
	I0108 21:26:57.323403    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:57.323403    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:57.323403    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:57.323403    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:57.327824    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:57.327824    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:57.327824    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:57.327824    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:57.327824    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:57.327824    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:57.327824    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:57 GMT
	I0108 21:26:57.327824    5636 round_trippers.go:580]     Audit-Id: dd765eef-f220-4c3d-a489-03055281d9d6
	I0108 21:26:57.327824    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"623","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0108 21:26:57.813344    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:57.813474    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:57.813474    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:57.813474    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:57.822147    5636 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0108 21:26:57.822703    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:57.822703    5636 round_trippers.go:580]     Audit-Id: d54187a6-800c-4119-a5b0-6f37916e7c38
	I0108 21:26:57.822703    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:57.822703    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:57.822703    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:57.822703    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:57.822703    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:57 GMT
	I0108 21:26:57.822703    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"623","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0108 21:26:58.314813    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:58.314954    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:58.314954    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:58.314954    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:58.318471    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:58.318471    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:58.318471    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:58 GMT
	I0108 21:26:58.318471    5636 round_trippers.go:580]     Audit-Id: d03b76cf-9e9d-4a03-893d-d4fc46f10984
	I0108 21:26:58.318471    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:58.318766    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:58.318766    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:58.318766    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:58.319031    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"623","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0108 21:26:58.816821    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:58.816950    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:58.816950    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:58.817061    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:58.820519    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:58.820519    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:58.820982    5636 round_trippers.go:580]     Audit-Id: 566bbfe4-1165-4e8c-b058-ddff0c4f205b
	I0108 21:26:58.820982    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:58.820982    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:58.820982    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:58.820982    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:58.820982    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:58 GMT
	I0108 21:26:58.821255    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"623","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0108 21:26:58.821830    5636 node_ready.go:58] node "multinode-554300-m02" has status "Ready":"False"
	I0108 21:26:59.318844    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:59.318927    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:59.318927    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:59.318927    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:59.322308    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:59.323131    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:59.323131    5636 round_trippers.go:580]     Audit-Id: 3ad0cc30-0ee1-4045-b6e3-4fe85ba7b628
	I0108 21:26:59.323131    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:59.323131    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:59.323131    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:59.323131    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:59.323131    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:59 GMT
	I0108 21:26:59.323362    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"623","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0108 21:26:59.819399    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:26:59.819399    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:59.819399    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:59.819399    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:59.823011    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:59.823011    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:59.823011    5636 round_trippers.go:580]     Audit-Id: 1e492c71-fc69-4bb1-a208-717596660769
	I0108 21:26:59.823011    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:59.823011    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:59.823011    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:59.823710    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:59.823710    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:59 GMT
	I0108 21:26:59.823908    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"638","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3255 chars]
	I0108 21:26:59.824413    5636 node_ready.go:49] node "multinode-554300-m02" has status "Ready":"True"
	I0108 21:26:59.824481    5636 node_ready.go:38] duration metric: took 16.0162795s waiting for node "multinode-554300-m02" to be "Ready" ...
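The polling loop above (node_ready.go) issues a GET against /api/v1/nodes/multinode-554300-m02 roughly every 500ms until the node reports Ready. As a minimal illustrative sketch only, not minikube's actual implementation, the same check can be written with client-go by reading the NodeReady condition from status.conditions; the kubeconfig path below is an assumption (the default ~/.kube/config), and the node name is taken from this log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the Node's Ready condition is True,
// which is what the "has status \"Ready\":\"False\"" lines above are checking.
func nodeIsReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const nodeName = "multinode-554300-m02" // node name as seen in the log above
	for {
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if nodeIsReady(node) {
			fmt.Printf("node %q is Ready\n", nodeName)
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms polling interval visible above
	}
}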
	I0108 21:26:59.824481    5636 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:26:59.824681    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods
	I0108 21:26:59.824681    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:59.824730    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:59.824730    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:59.829051    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:26:59.829051    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:59.829051    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:59.829814    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:59.829814    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:59 GMT
	I0108 21:26:59.829814    5636 round_trippers.go:580]     Audit-Id: 1314c93c-bf40-4127-be44-0a4dfcdca3b2
	I0108 21:26:59.829814    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:59.829814    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:59.832178    5636 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"638"},"items":[{"metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"438","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67466 chars]
	I0108 21:26:59.835439    5636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-q7vd7" in "kube-system" namespace to be "Ready" ...
	I0108 21:26:59.835559    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q7vd7
	I0108 21:26:59.835559    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:59.835559    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:59.835559    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:59.837405    5636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:26:59.837405    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:59.837405    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:59 GMT
	I0108 21:26:59.837405    5636 round_trippers.go:580]     Audit-Id: 5600cd45-7f82-4b30-85a1-65b0157e6d12
	I0108 21:26:59.837405    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:59.837405    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:59.838388    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:59.838388    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:59.838705    5636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"438","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I0108 21:26:59.839675    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:26:59.839745    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:59.839745    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:59.839745    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:59.842133    5636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:26:59.842133    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:59.842133    5636 round_trippers.go:580]     Audit-Id: 4c373718-d6d3-4360-a4a6-67ae07a80bf4
	I0108 21:26:59.842133    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:59.842133    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:59.842550    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:59.842550    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:59.842550    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:59 GMT
	I0108 21:26:59.843019    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"446","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0108 21:26:59.843512    5636 pod_ready.go:92] pod "coredns-5dd5756b68-q7vd7" in "kube-system" namespace has status "Ready":"True"
	I0108 21:26:59.843574    5636 pod_ready.go:81] duration metric: took 8.1346ms waiting for pod "coredns-5dd5756b68-q7vd7" in "kube-system" namespace to be "Ready" ...
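For each system-critical pod, pod_ready.go above fetches the pod from the kube-system namespace and then re-reads the node it is scheduled on before declaring it Ready. A minimal sketch, again only an illustration and not minikube's code, of listing pods by one of the labels from the waiting message (k8s-app=kube-dns) and checking the PodReady condition; the kubeconfig path is assumed as before:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the Pod's Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Label selector taken from the waiting message in the log above.
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for i := range pods.Items {
		fmt.Printf("%s ready=%v\n", pods.Items[i].Name, podIsReady(&pods.Items[i]))
	}
}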
	I0108 21:26:59.843574    5636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:26:59.843731    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-554300
	I0108 21:26:59.843771    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:59.843771    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:59.843771    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:59.846585    5636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:26:59.846585    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:59.847414    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:59.847414    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:59 GMT
	I0108 21:26:59.847414    5636 round_trippers.go:580]     Audit-Id: c161d8f4-f636-47a9-9583-7f712d5bc8a5
	I0108 21:26:59.847414    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:59.847414    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:59.847414    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:59.847414    5636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-554300","namespace":"kube-system","uid":"06d58411-089c-4312-8685-a2cb7f7e3c33","resourceVersion":"314","creationTimestamp":"2024-01-08T21:23:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.29.107.59:2379","kubernetes.io/config.hash":"9b41c89b0647a3bffea3212cb5464059","kubernetes.io/config.mirror":"9b41c89b0647a3bffea3212cb5464059","kubernetes.io/config.seen":"2024-01-08T21:23:23.164883235Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I0108 21:26:59.848352    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:26:59.848352    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:59.848423    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:59.848423    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:59.851160    5636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:26:59.851432    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:59.851481    5636 round_trippers.go:580]     Audit-Id: 7a1bedb0-bcae-456f-8da5-99da042db407
	I0108 21:26:59.851481    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:59.851643    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:59.851669    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:59.851669    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:59.851669    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:59 GMT
	I0108 21:26:59.851744    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"446","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0108 21:26:59.852552    5636 pod_ready.go:92] pod "etcd-multinode-554300" in "kube-system" namespace has status "Ready":"True"
	I0108 21:26:59.852581    5636 pod_ready.go:81] duration metric: took 8.9503ms waiting for pod "etcd-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:26:59.852581    5636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:26:59.852665    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-554300
	I0108 21:26:59.852725    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:59.852725    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:59.852725    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:59.857021    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:26:59.857211    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:59.857211    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:59.857211    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:59 GMT
	I0108 21:26:59.857211    5636 round_trippers.go:580]     Audit-Id: f662c442-12c0-4842-8994-63a70fb3f048
	I0108 21:26:59.857211    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:59.857211    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:59.857211    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:59.857211    5636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-554300","namespace":"kube-system","uid":"54bb0d68-f8ac-4f67-a9cd-71a15ce550ad","resourceVersion":"326","creationTimestamp":"2024-01-08T21:23:32Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.29.107.59:8443","kubernetes.io/config.hash":"2efb47d905867f62472179a55c21eb33","kubernetes.io/config.mirror":"2efb47d905867f62472179a55c21eb33","kubernetes.io/config.seen":"2024-01-08T21:23:32.232190192Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I0108 21:26:59.858091    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:26:59.858091    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:59.858091    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:59.858091    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:59.862547    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:26:59.862547    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:59.862547    5636 round_trippers.go:580]     Audit-Id: cf8bc1a0-99ee-4c34-bf14-e224edba80b7
	I0108 21:26:59.862547    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:59.862547    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:59.862547    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:59.862547    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:59.862547    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:59 GMT
	I0108 21:26:59.863215    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"446","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0108 21:26:59.863215    5636 pod_ready.go:92] pod "kube-apiserver-multinode-554300" in "kube-system" namespace has status "Ready":"True"
	I0108 21:26:59.863749    5636 pod_ready.go:81] duration metric: took 11.1687ms waiting for pod "kube-apiserver-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:26:59.863749    5636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:26:59.863908    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-554300
	I0108 21:26:59.863908    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:59.863908    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:59.863908    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:59.866447    5636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:26:59.867385    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:59.867505    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:59.867505    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:59.867568    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:59 GMT
	I0108 21:26:59.867568    5636 round_trippers.go:580]     Audit-Id: 83e5066a-47fc-4d80-b806-3c66a3e5f3ef
	I0108 21:26:59.867568    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:59.867568    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:59.867568    5636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-554300","namespace":"kube-system","uid":"c5c47910-dee9-4e42-8623-dbc45d13564f","resourceVersion":"361","creationTimestamp":"2024-01-08T21:23:32Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8c1bbf537f0866640621fd36da3286e0","kubernetes.io/config.mirror":"8c1bbf537f0866640621fd36da3286e0","kubernetes.io/config.seen":"2024-01-08T21:23:32.232191792Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I0108 21:26:59.868295    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:26:59.868295    5636 round_trippers.go:469] Request Headers:
	I0108 21:26:59.868295    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:59.868295    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:26:59.871509    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:59.871509    5636 round_trippers.go:577] Response Headers:
	I0108 21:26:59.871797    5636 round_trippers.go:580]     Audit-Id: 961d405d-2130-43ad-83a9-a1825f4a318b
	I0108 21:26:59.871797    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:59.871797    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:59.871797    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:26:59.871797    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:26:59.871797    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:59 GMT
	I0108 21:26:59.871864    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"446","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0108 21:26:59.871864    5636 pod_ready.go:92] pod "kube-controller-manager-multinode-554300" in "kube-system" namespace has status "Ready":"True"
	I0108 21:26:59.871864    5636 pod_ready.go:81] duration metric: took 8.1145ms waiting for pod "kube-controller-manager-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:26:59.872406    5636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jsq7c" in "kube-system" namespace to be "Ready" ...
	I0108 21:27:00.023249    5636 request.go:629] Waited for 150.6089ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jsq7c
	I0108 21:27:00.023465    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jsq7c
	I0108 21:27:00.023465    5636 round_trippers.go:469] Request Headers:
	I0108 21:27:00.023465    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:00.023465    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:27:00.026815    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:27:00.026815    5636 round_trippers.go:577] Response Headers:
	I0108 21:27:00.027680    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:27:00.027680    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:27:00.027680    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:00 GMT
	I0108 21:27:00.027680    5636 round_trippers.go:580]     Audit-Id: 12fe81ad-0f90-494a-b144-76064032f867
	I0108 21:27:00.027680    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:00.027680    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:00.027946    5636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jsq7c","generateName":"kube-proxy-","namespace":"kube-system","uid":"cbc6a2d2-bb66-4af4-8a7d-315bc293cac0","resourceVersion":"398","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0597a1a7-46a6-4b69-bb4b-83cbc4a269de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0597a1a7-46a6-4b69-bb4b-83cbc4a269de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I0108 21:27:00.225166    5636 request.go:629] Waited for 196.7107ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:27:00.225483    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:27:00.225528    5636 round_trippers.go:469] Request Headers:
	I0108 21:27:00.225553    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:00.225553    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:27:00.229491    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:27:00.229491    5636 round_trippers.go:577] Response Headers:
	I0108 21:27:00.229491    5636 round_trippers.go:580]     Audit-Id: e828b5b4-f155-412f-a16b-1410205625d2
	I0108 21:27:00.229491    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:00.229491    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:00.230013    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:27:00.230013    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:27:00.230013    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:00 GMT
	I0108 21:27:00.230284    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"446","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0108 21:27:00.230732    5636 pod_ready.go:92] pod "kube-proxy-jsq7c" in "kube-system" namespace has status "Ready":"True"
	I0108 21:27:00.230905    5636 pod_ready.go:81] duration metric: took 358.4967ms waiting for pod "kube-proxy-jsq7c" in "kube-system" namespace to be "Ready" ...
	I0108 21:27:00.230905    5636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nbzjb" in "kube-system" namespace to be "Ready" ...
	I0108 21:27:00.427459    5636 request.go:629] Waited for 196.2816ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nbzjb
	I0108 21:27:00.427565    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nbzjb
	I0108 21:27:00.427948    5636 round_trippers.go:469] Request Headers:
	I0108 21:27:00.427948    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:27:00.427948    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:00.432558    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:27:00.432558    5636 round_trippers.go:577] Response Headers:
	I0108 21:27:00.432558    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:00 GMT
	I0108 21:27:00.432558    5636 round_trippers.go:580]     Audit-Id: 80933ddc-6efe-4e7b-bea9-632895019197
	I0108 21:27:00.432558    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:00.432558    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:00.432558    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:27:00.432558    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:27:00.432558    5636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nbzjb","generateName":"kube-proxy-","namespace":"kube-system","uid":"73b08d5a-2015-4712-92b4-2d12298e9fc3","resourceVersion":"624","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0597a1a7-46a6-4b69-bb4b-83cbc4a269de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0597a1a7-46a6-4b69-bb4b-83cbc4a269de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0108 21:27:00.628624    5636 request.go:629] Waited for 195.0673ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:27:00.628952    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:27:00.628952    5636 round_trippers.go:469] Request Headers:
	I0108 21:27:00.628952    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:00.629067    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:27:00.632447    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:27:00.632447    5636 round_trippers.go:577] Response Headers:
	I0108 21:27:00.633294    5636 round_trippers.go:580]     Audit-Id: 734ccde2-a545-4924-ad05-d1e8de334851
	I0108 21:27:00.633294    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:00.633294    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:00.633294    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:27:00.633294    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:27:00.633294    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:00 GMT
	I0108 21:27:00.633525    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"638","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_26_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3255 chars]
	I0108 21:27:00.634083    5636 pod_ready.go:92] pod "kube-proxy-nbzjb" in "kube-system" namespace has status "Ready":"True"
	I0108 21:27:00.634188    5636 pod_ready.go:81] duration metric: took 403.2809ms waiting for pod "kube-proxy-nbzjb" in "kube-system" namespace to be "Ready" ...
	I0108 21:27:00.634188    5636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:27:00.833389    5636 request.go:629] Waited for 198.8084ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-554300
	I0108 21:27:00.833619    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-554300
	I0108 21:27:00.833729    5636 round_trippers.go:469] Request Headers:
	I0108 21:27:00.833787    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:27:00.833840    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:00.838220    5636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:27:00.838220    5636 round_trippers.go:577] Response Headers:
	I0108 21:27:00.838220    5636 round_trippers.go:580]     Audit-Id: a4723807-1c61-4f54-a5f5-178f84c83ace
	I0108 21:27:00.838220    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:00.838220    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:00.838779    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:27:00.838779    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:27:00.838779    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:00 GMT
	I0108 21:27:00.839073    5636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-554300","namespace":"kube-system","uid":"f5b78bba-6cd0-495b-b6d6-c9afd93b3534","resourceVersion":"313","creationTimestamp":"2024-01-08T21:23:32Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"33f58bfb7438c4df2c3fb56db1f613ab","kubernetes.io/config.mirror":"33f58bfb7438c4df2c3fb56db1f613ab","kubernetes.io/config.seen":"2024-01-08T21:23:32.232192792Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I0108 21:27:01.020439    5636 request.go:629] Waited for 180.5832ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:27:01.020439    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes/multinode-554300
	I0108 21:27:01.020439    5636 round_trippers.go:469] Request Headers:
	I0108 21:27:01.020439    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:01.020439    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:27:01.024028    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:27:01.024028    5636 round_trippers.go:577] Response Headers:
	I0108 21:27:01.024606    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:01.024606    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:01.024606    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:27:01.024606    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:27:01.024606    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:01 GMT
	I0108 21:27:01.024606    5636 round_trippers.go:580]     Audit-Id: d293c069-8755-4e4e-937a-68b6eb681604
	I0108 21:27:01.025752    5636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"446","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0108 21:27:01.026357    5636 pod_ready.go:92] pod "kube-scheduler-multinode-554300" in "kube-system" namespace has status "Ready":"True"
	I0108 21:27:01.026431    5636 pod_ready.go:81] duration metric: took 392.117ms waiting for pod "kube-scheduler-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:27:01.026431    5636 pod_ready.go:38] duration metric: took 1.2019435s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:27:01.026431    5636 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:27:01.042098    5636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:27:01.067369    5636 system_svc.go:56] duration metric: took 40.9383ms WaitForService to wait for kubelet.
	I0108 21:27:01.067369    5636 kubeadm.go:581] duration metric: took 17.2984583s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 21:27:01.067369    5636 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:27:01.223712    5636 request.go:629] Waited for 155.9924ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.107.59:8443/api/v1/nodes
	I0108 21:27:01.223807    5636 round_trippers.go:463] GET https://172.29.107.59:8443/api/v1/nodes
	I0108 21:27:01.224082    5636 round_trippers.go:469] Request Headers:
	I0108 21:27:01.224161    5636 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:27:01.224161    5636 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:01.227764    5636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:27:01.228795    5636 round_trippers.go:577] Response Headers:
	I0108 21:27:01.228832    5636 round_trippers.go:580]     Audit-Id: 8c591e53-f549-4541-ae92-784178dcae27
	I0108 21:27:01.228832    5636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:01.228832    5636 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:01.228832    5636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:27:01.228832    5636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:27:01.228961    5636 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:01 GMT
	I0108 21:27:01.229143    5636 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"640"},"items":[{"metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"446","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9258 chars]
	I0108 21:27:01.229842    5636 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:27:01.229842    5636 node_conditions.go:123] node cpu capacity is 2
	I0108 21:27:01.229842    5636 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:27:01.229842    5636 node_conditions.go:123] node cpu capacity is 2
	I0108 21:27:01.229842    5636 node_conditions.go:105] duration metric: took 162.4721ms to run NodePressure ...
	I0108 21:27:01.229842    5636 start.go:228] waiting for startup goroutines ...
	I0108 21:27:01.229842    5636 start.go:242] writing updated cluster config ...
	I0108 21:27:01.247301    5636 ssh_runner.go:195] Run: rm -f paused
	I0108 21:27:01.411344    5636 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 21:27:01.412644    5636 out.go:177] * Done! kubectl is now configured to use "multinode-554300" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-01-08 21:21:38 UTC, ends at Mon 2024-01-08 21:28:17 UTC. --
	Jan 08 21:23:57 multinode-554300 dockerd[1322]: time="2024-01-08T21:23:57.802173219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:23:57 multinode-554300 dockerd[1322]: time="2024-01-08T21:23:57.817343878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:23:57 multinode-554300 dockerd[1322]: time="2024-01-08T21:23:57.817424179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:23:57 multinode-554300 dockerd[1322]: time="2024-01-08T21:23:57.817466479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:23:57 multinode-554300 dockerd[1322]: time="2024-01-08T21:23:57.817477380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:23:58 multinode-554300 cri-dockerd[1208]: time="2024-01-08T21:23:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c1d7f7a3821e8fa274916aee33a5b58cb4cf17792c45d812dad09d55916835b1/resolv.conf as [nameserver 172.29.96.1]"
	Jan 08 21:23:58 multinode-554300 cri-dockerd[1208]: time="2024-01-08T21:23:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2079ab544b8d9efc44d3ac3301c934ba63e0723f1f7a14cecd3d0a6c1982f29a/resolv.conf as [nameserver 172.29.96.1]"
	Jan 08 21:23:58 multinode-554300 dockerd[1322]: time="2024-01-08T21:23:58.485313462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:23:58 multinode-554300 dockerd[1322]: time="2024-01-08T21:23:58.485529864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:23:58 multinode-554300 dockerd[1322]: time="2024-01-08T21:23:58.485645666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:23:58 multinode-554300 dockerd[1322]: time="2024-01-08T21:23:58.485729966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:23:58 multinode-554300 dockerd[1322]: time="2024-01-08T21:23:58.637744160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:23:58 multinode-554300 dockerd[1322]: time="2024-01-08T21:23:58.638129564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:23:58 multinode-554300 dockerd[1322]: time="2024-01-08T21:23:58.638283465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:23:58 multinode-554300 dockerd[1322]: time="2024-01-08T21:23:58.638435967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:27:26 multinode-554300 dockerd[1322]: time="2024-01-08T21:27:26.660179665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:27:26 multinode-554300 dockerd[1322]: time="2024-01-08T21:27:26.660345493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:27:26 multinode-554300 dockerd[1322]: time="2024-01-08T21:27:26.660397302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:27:26 multinode-554300 dockerd[1322]: time="2024-01-08T21:27:26.660417805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:27:27 multinode-554300 cri-dockerd[1208]: time="2024-01-08T21:27:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e5200fb3682dbeb67b8a7cd3c826bb2af4ba32ac5dc4f6db9e0a740c3b674fa7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jan 08 21:27:28 multinode-554300 cri-dockerd[1208]: time="2024-01-08T21:27:28Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jan 08 21:27:28 multinode-554300 dockerd[1322]: time="2024-01-08T21:27:28.384862901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:27:28 multinode-554300 dockerd[1322]: time="2024-01-08T21:27:28.385051032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:27:28 multinode-554300 dockerd[1322]: time="2024-01-08T21:27:28.385182853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:27:28 multinode-554300 dockerd[1322]: time="2024-01-08T21:27:28.385201756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bb85cd47a0309       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   49 seconds ago      Running             busybox                   0                   e5200fb3682db       busybox-5bc68d56bd-hrhnw
	146f9c24d2a4b       ead0a4a53df89                                                                                         4 minutes ago       Running             coredns                   0                   2079ab544b8d9       coredns-5dd5756b68-q7vd7
	77cfa745d6789       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   c1d7f7a3821e8       storage-provisioner
	359babcc50a69       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              4 minutes ago       Running             kindnet-cni               0                   ceed09dba4fb2       kindnet-5r79t
	2c18647ee3312       83f6cc407eed8                                                                                         4 minutes ago       Running             kube-proxy                0                   5e4892494426d       kube-proxy-jsq7c
	3f926c6626bfc       73deb9a3f7025                                                                                         4 minutes ago       Running             etcd                      0                   b50a134590a70       etcd-multinode-554300
	c193667d32e41       e3db313c6dbc0                                                                                         4 minutes ago       Running             kube-scheduler            0                   4081e28ae5451       kube-scheduler-multinode-554300
	5a21be70e8c82       d058aa5ab969c                                                                                         4 minutes ago       Running             kube-controller-manager   0                   6c64a54424c9b       kube-controller-manager-multinode-554300
	eb93c2ad9198e       7fe0e6f37db33                                                                                         4 minutes ago       Running             kube-apiserver            0                   d5dd9fc6e97eb       kube-apiserver-multinode-554300
	
	
	==> coredns [146f9c24d2a4] <==
	[INFO] 10.244.0.3:33249 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000161823s
	[INFO] 10.244.1.2:46193 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177326s
	[INFO] 10.244.1.2:37675 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000063209s
	[INFO] 10.244.1.2:36622 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00006491s
	[INFO] 10.244.1.2:46580 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000053708s
	[INFO] 10.244.1.2:42017 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000053208s
	[INFO] 10.244.1.2:38648 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000070711s
	[INFO] 10.244.1.2:38719 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058008s
	[INFO] 10.244.1.2:53435 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057109s
	[INFO] 10.244.0.3:36893 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119617s
	[INFO] 10.244.0.3:58794 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000490371s
	[INFO] 10.244.0.3:53722 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171725s
	[INFO] 10.244.0.3:46338 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000545679s
	[INFO] 10.244.1.2:58618 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152622s
	[INFO] 10.244.1.2:54835 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119318s
	[INFO] 10.244.1.2:36265 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170025s
	[INFO] 10.244.1.2:55902 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153422s
	[INFO] 10.244.0.3:52265 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169424s
	[INFO] 10.244.0.3:43278 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108816s
	[INFO] 10.244.0.3:35101 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000319945s
	[INFO] 10.244.0.3:36695 - 5 "PTR IN 1.96.29.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00014182s
	[INFO] 10.244.1.2:44665 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017322s
	[INFO] 10.244.1.2:52765 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153417s
	[INFO] 10.244.1.2:57262 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000063407s
	[INFO] 10.244.1.2:44027 - 5 "PTR IN 1.96.29.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000114314s
	
	
	==> describe nodes <==
	Name:               multinode-554300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-554300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=multinode-554300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T21_23_33_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:23:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-554300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:28:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:27:36 +0000   Mon, 08 Jan 2024 21:23:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:27:36 +0000   Mon, 08 Jan 2024 21:23:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:27:36 +0000   Mon, 08 Jan 2024 21:23:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:27:36 +0000   Mon, 08 Jan 2024 21:23:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.107.59
	  Hostname:    multinode-554300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 26cbcdd5ee414fa38121905407ec3cca
	  System UUID:                b9399726-afc3-4741-8f8d-1fb422dcdbf7
	  Boot ID:                    e9852b39-e568-4891-a7c2-8504d9a60b4d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-hrhnw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 coredns-5dd5756b68-q7vd7                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m33s
	  kube-system                 etcd-multinode-554300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m46s
	  kube-system                 kindnet-5r79t                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m33s
	  kube-system                 kube-apiserver-multinode-554300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-controller-manager-multinode-554300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-proxy-jsq7c                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-scheduler-multinode-554300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m31s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m54s (x8 over 4m54s)  kubelet          Node multinode-554300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m54s (x8 over 4m54s)  kubelet          Node multinode-554300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m54s (x7 over 4m54s)  kubelet          Node multinode-554300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m45s                  kubelet          Node multinode-554300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m45s                  kubelet          Node multinode-554300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m45s                  kubelet          Node multinode-554300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m34s                  node-controller  Node multinode-554300 event: Registered Node multinode-554300 in Controller
	  Normal  NodeReady                4m20s                  kubelet          Node multinode-554300 status is now: NodeReady
	
	
	Name:               multinode-554300-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-554300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=multinode-554300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_08T21_26_43_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:26:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-554300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:28:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:27:43 +0000   Mon, 08 Jan 2024 21:26:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:27:43 +0000   Mon, 08 Jan 2024 21:26:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:27:43 +0000   Mon, 08 Jan 2024 21:26:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:27:43 +0000   Mon, 08 Jan 2024 21:26:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.96.43
	  Hostname:    multinode-554300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 a5da36c4ad7d46068f0f09ecabc68268
	  System UUID:                55f6d4cc-d2a8-8b44-8585-1032f5566229
	  Boot ID:                    99245e87-302a-4ac7-a91c-2c4f3bb1e366
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-w2zbn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kindnet-4q524               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      95s
	  kube-system                 kube-proxy-nbzjb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 85s                kube-proxy       
	  Normal  Starting                 96s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  95s (x2 over 96s)  kubelet          Node multinode-554300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s (x2 over 96s)  kubelet          Node multinode-554300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     95s (x2 over 96s)  kubelet          Node multinode-554300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  95s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           93s                node-controller  Node multinode-554300-m02 event: Registered Node multinode-554300-m02 in Controller
	  Normal  NodeReady                78s                kubelet          Node multinode-554300-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000025] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.041244] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.254795] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.345933] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +7.773150] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan 8 21:22] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.146256] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[ +29.820905] systemd-fstab-generator[942]: Ignoring "noauto" for root device
	[  +0.577940] systemd-fstab-generator[979]: Ignoring "noauto" for root device
	[  +0.161532] systemd-fstab-generator[990]: Ignoring "noauto" for root device
	[  +0.192661] systemd-fstab-generator[1003]: Ignoring "noauto" for root device
	[Jan 8 21:23] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.398380] systemd-fstab-generator[1163]: Ignoring "noauto" for root device
	[  +0.153502] systemd-fstab-generator[1174]: Ignoring "noauto" for root device
	[  +0.160329] systemd-fstab-generator[1185]: Ignoring "noauto" for root device
	[  +0.237776] systemd-fstab-generator[1200]: Ignoring "noauto" for root device
	[ +12.071783] systemd-fstab-generator[1307]: Ignoring "noauto" for root device
	[  +2.340413] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.259389] systemd-fstab-generator[1685]: Ignoring "noauto" for root device
	[  +0.668285] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.662636] systemd-fstab-generator[2632]: Ignoring "noauto" for root device
	[ +24.810770] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [3f926c6626bf] <==
	{"level":"info","ts":"2024-01-08T21:23:26.067366Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.29.107.59:2380"}
	{"level":"info","ts":"2024-01-08T21:23:26.067938Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f556f2245c8dbb59","initial-advertise-peer-urls":["https://172.29.107.59:2380"],"listen-peer-urls":["https://172.29.107.59:2380"],"advertise-client-urls":["https://172.29.107.59:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.107.59:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-08T21:23:26.068144Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-08T21:23:26.667441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f556f2245c8dbb59 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-08T21:23:26.667804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f556f2245c8dbb59 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-08T21:23:26.668055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f556f2245c8dbb59 received MsgPreVoteResp from f556f2245c8dbb59 at term 1"}
	{"level":"info","ts":"2024-01-08T21:23:26.668307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f556f2245c8dbb59 became candidate at term 2"}
	{"level":"info","ts":"2024-01-08T21:23:26.668429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f556f2245c8dbb59 received MsgVoteResp from f556f2245c8dbb59 at term 2"}
	{"level":"info","ts":"2024-01-08T21:23:26.668591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f556f2245c8dbb59 became leader at term 2"}
	{"level":"info","ts":"2024-01-08T21:23:26.668733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f556f2245c8dbb59 elected leader f556f2245c8dbb59 at term 2"}
	{"level":"info","ts":"2024-01-08T21:23:26.670318Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:23:26.672249Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f556f2245c8dbb59","local-member-attributes":"{Name:multinode-554300 ClientURLs:[https://172.29.107.59:2379]}","request-path":"/0/members/f556f2245c8dbb59/attributes","cluster-id":"54a2b764b48fb5bd","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T21:23:26.672572Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"54a2b764b48fb5bd","local-member-id":"f556f2245c8dbb59","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:23:26.672879Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:23:26.673069Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:23:26.673369Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:23:26.675621Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T21:23:26.675926Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T21:23:26.677303Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T21:23:26.676034Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:23:26.691467Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.29.107.59:2379"}
	{"level":"info","ts":"2024-01-08T21:26:47.477589Z","caller":"traceutil/trace.go:171","msg":"trace[1374339044] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"201.128699ms","start":"2024-01-08T21:26:47.276443Z","end":"2024-01-08T21:26:47.477572Z","steps":["trace[1374339044] 'process raft request'  (duration: 201.033368ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:26:47.479372Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.767646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-554300-m02\" ","response":"range_response_count:1 size:2839"}
	{"level":"info","ts":"2024-01-08T21:26:47.479418Z","caller":"traceutil/trace.go:171","msg":"trace[897995498] range","detail":"{range_begin:/registry/minions/multinode-554300-m02; range_end:; response_count:1; response_revision:612; }","duration":"183.835068ms","start":"2024-01-08T21:26:47.295575Z","end":"2024-01-08T21:26:47.479411Z","steps":["trace[897995498] 'agreement among raft nodes before linearized reading'  (duration: 183.738337ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:26:47.479248Z","caller":"traceutil/trace.go:171","msg":"trace[1519768817] linearizableReadLoop","detail":"{readStateIndex:664; appliedIndex:664; }","duration":"183.633403ms","start":"2024-01-08T21:26:47.295601Z","end":"2024-01-08T21:26:47.479234Z","steps":["trace[1519768817] 'read index received'  (duration: 182.729108ms)","trace[1519768817] 'applied index is now lower than readState.Index'  (duration: 903.194µs)"],"step_count":2}
	
	
	==> kernel <==
	 21:28:17 up 6 min,  0 users,  load average: 0.35, 0.46, 0.26
	Linux multinode-554300 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [359babcc50a6] <==
	I0108 21:27:16.872525       1 main.go:250] Node multinode-554300-m02 has CIDR [10.244.1.0/24] 
	I0108 21:27:26.877507       1 main.go:223] Handling node with IPs: map[172.29.107.59:{}]
	I0108 21:27:26.877560       1 main.go:227] handling current node
	I0108 21:27:26.877573       1 main.go:223] Handling node with IPs: map[172.29.96.43:{}]
	I0108 21:27:26.877581       1 main.go:250] Node multinode-554300-m02 has CIDR [10.244.1.0/24] 
	I0108 21:27:36.892573       1 main.go:223] Handling node with IPs: map[172.29.107.59:{}]
	I0108 21:27:36.892667       1 main.go:227] handling current node
	I0108 21:27:36.892682       1 main.go:223] Handling node with IPs: map[172.29.96.43:{}]
	I0108 21:27:36.893043       1 main.go:250] Node multinode-554300-m02 has CIDR [10.244.1.0/24] 
	I0108 21:27:46.908998       1 main.go:223] Handling node with IPs: map[172.29.107.59:{}]
	I0108 21:27:46.909133       1 main.go:227] handling current node
	I0108 21:27:46.909149       1 main.go:223] Handling node with IPs: map[172.29.96.43:{}]
	I0108 21:27:46.909157       1 main.go:250] Node multinode-554300-m02 has CIDR [10.244.1.0/24] 
	I0108 21:27:56.916751       1 main.go:223] Handling node with IPs: map[172.29.107.59:{}]
	I0108 21:27:56.916829       1 main.go:227] handling current node
	I0108 21:27:56.916844       1 main.go:223] Handling node with IPs: map[172.29.96.43:{}]
	I0108 21:27:56.916852       1 main.go:250] Node multinode-554300-m02 has CIDR [10.244.1.0/24] 
	I0108 21:28:06.923117       1 main.go:223] Handling node with IPs: map[172.29.107.59:{}]
	I0108 21:28:06.923215       1 main.go:227] handling current node
	I0108 21:28:06.923232       1 main.go:223] Handling node with IPs: map[172.29.96.43:{}]
	I0108 21:28:06.923241       1 main.go:250] Node multinode-554300-m02 has CIDR [10.244.1.0/24] 
	I0108 21:28:16.932292       1 main.go:223] Handling node with IPs: map[172.29.107.59:{}]
	I0108 21:28:16.932334       1 main.go:227] handling current node
	I0108 21:28:16.932347       1 main.go:223] Handling node with IPs: map[172.29.96.43:{}]
	I0108 21:28:16.932353       1 main.go:250] Node multinode-554300-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [eb93c2ad9198] <==
	I0108 21:23:28.624338       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0108 21:23:28.628419       1 controller.go:624] quota admission added evaluator for: namespaces
	I0108 21:23:28.693989       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0108 21:23:28.694047       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 21:23:28.694862       1 aggregator.go:166] initial CRD sync complete...
	I0108 21:23:28.694902       1 autoregister_controller.go:141] Starting autoregister controller
	I0108 21:23:28.694911       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0108 21:23:28.694921       1 cache.go:39] Caches are synced for autoregister controller
	E0108 21:23:28.734383       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0108 21:23:28.938028       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 21:23:29.513463       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0108 21:23:29.519936       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0108 21:23:29.519954       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 21:23:30.343424       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:23:30.403781       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 21:23:30.517976       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0108 21:23:30.525220       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.29.107.59]
	I0108 21:23:30.526291       1 controller.go:624] quota admission added evaluator for: endpoints
	I0108 21:23:30.531447       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 21:23:30.586886       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0108 21:23:32.073396       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0108 21:23:32.088730       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0108 21:23:32.102947       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0108 21:23:44.644897       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0108 21:23:44.695433       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [5a21be70e8c8] <==
	I0108 21:23:57.186569       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.8µs"
	I0108 21:23:57.222702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.8µs"
	I0108 21:23:59.004319       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0108 21:23:59.781067       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="57.4µs"
	I0108 21:23:59.804929       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.988383ms"
	I0108 21:23:59.805822       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="100.301µs"
	I0108 21:26:42.015944       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-554300-m02\" does not exist"
	I0108 21:26:42.033501       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-554300-m02" podCIDRs=["10.244.1.0/24"]
	I0108 21:26:42.043657       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nbzjb"
	I0108 21:26:42.054479       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-4q524"
	I0108 21:26:44.037563       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-554300-m02"
	I0108 21:26:44.037692       1 event.go:307] "Event occurred" object="multinode-554300-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-554300-m02 event: Registered Node multinode-554300-m02 in Controller"
	I0108 21:26:59.469721       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-554300-m02"
	I0108 21:27:26.118007       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0108 21:27:26.132308       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-w2zbn"
	I0108 21:27:26.159531       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-hrhnw"
	I0108 21:27:26.187535       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="69.979581ms"
	I0108 21:27:26.204612       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="16.960131ms"
	I0108 21:27:26.229403       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="24.575902ms"
	I0108 21:27:26.230727       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="200.633µs"
	I0108 21:27:26.231565       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="50.308µs"
	I0108 21:27:29.232167       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.784491ms"
	I0108 21:27:29.232398       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="80.213µs"
	I0108 21:27:29.314449       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.884306ms"
	I0108 21:27:29.314556       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="74.011µs"
	
	
	==> kube-proxy [2c18647ee331] <==
	I0108 21:23:46.067501       1 server_others.go:69] "Using iptables proxy"
	I0108 21:23:46.092785       1 node.go:141] Successfully retrieved node IP: 172.29.107.59
	I0108 21:23:46.212583       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 21:23:46.212714       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 21:23:46.216002       1 server_others.go:152] "Using iptables Proxier"
	I0108 21:23:46.216133       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 21:23:46.216370       1 server.go:846] "Version info" version="v1.28.4"
	I0108 21:23:46.216411       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:23:46.217701       1 config.go:188] "Starting service config controller"
	I0108 21:23:46.217830       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 21:23:46.217870       1 config.go:97] "Starting endpoint slice config controller"
	I0108 21:23:46.217882       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 21:23:46.218881       1 config.go:315] "Starting node config controller"
	I0108 21:23:46.218899       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 21:23:46.318424       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 21:23:46.318484       1 shared_informer.go:318] Caches are synced for service config
	I0108 21:23:46.319187       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [c193667d32e4] <==
	W0108 21:23:29.607938       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:23:29.608165       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 21:23:29.620298       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:23:29.620325       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 21:23:29.667961       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:23:29.668069       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 21:23:29.684016       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 21:23:29.684046       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 21:23:29.821310       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 21:23:29.821697       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 21:23:29.831426       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:23:29.831522       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 21:23:29.908576       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 21:23:29.908612       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 21:23:29.937303       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:23:29.937511       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 21:23:29.957204       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:23:29.957244       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 21:23:30.011985       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:23:30.012015       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 21:23:30.039176       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:23:30.039207       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 21:23:30.060512       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:23:30.060909       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0108 21:23:32.549569       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 21:21:38 UTC, ends at Mon 2024-01-08 21:28:17 UTC. --
	Jan 08 21:23:57 multinode-554300 kubelet[2653]: I0108 21:23:57.217819    2653 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdhn5\" (UniqueName: \"kubernetes.io/projected/2fb8721f-01cc-4078-b45c-964d73e3da98-kube-api-access-gdhn5\") pod \"storage-provisioner\" (UID: \"2fb8721f-01cc-4078-b45c-964d73e3da98\") " pod="kube-system/storage-provisioner"
	Jan 08 21:23:57 multinode-554300 kubelet[2653]: I0108 21:23:57.217943    2653 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2fb8721f-01cc-4078-b45c-964d73e3da98-tmp\") pod \"storage-provisioner\" (UID: \"2fb8721f-01cc-4078-b45c-964d73e3da98\") " pod="kube-system/storage-provisioner"
	Jan 08 21:23:57 multinode-554300 kubelet[2653]: I0108 21:23:57.218049    2653 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe215542-1a69-4152-9098-06937431fa74-config-volume\") pod \"coredns-5dd5756b68-q7vd7\" (UID: \"fe215542-1a69-4152-9098-06937431fa74\") " pod="kube-system/coredns-5dd5756b68-q7vd7"
	Jan 08 21:23:59 multinode-554300 kubelet[2653]: I0108 21:23:59.778743    2653 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=6.778543093 podCreationTimestamp="2024-01-08 21:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 21:23:58.756661329 +0000 UTC m=+26.730044855" watchObservedRunningTime="2024-01-08 21:23:59.778543093 +0000 UTC m=+27.751926619"
	Jan 08 21:23:59 multinode-554300 kubelet[2653]: I0108 21:23:59.797452    2653 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-q7vd7" podStartSLOduration=15.797423667 podCreationTimestamp="2024-01-08 21:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 21:23:59.780020207 +0000 UTC m=+27.753403833" watchObservedRunningTime="2024-01-08 21:23:59.797423667 +0000 UTC m=+27.770807193"
	Jan 08 21:24:32 multinode-554300 kubelet[2653]: E0108 21:24:32.386243    2653 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:24:32 multinode-554300 kubelet[2653]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:24:32 multinode-554300 kubelet[2653]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:24:32 multinode-554300 kubelet[2653]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:25:32 multinode-554300 kubelet[2653]: E0108 21:25:32.386224    2653 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:25:32 multinode-554300 kubelet[2653]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:25:32 multinode-554300 kubelet[2653]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:25:32 multinode-554300 kubelet[2653]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:26:32 multinode-554300 kubelet[2653]: E0108 21:26:32.386069    2653 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:26:32 multinode-554300 kubelet[2653]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:26:32 multinode-554300 kubelet[2653]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:26:32 multinode-554300 kubelet[2653]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:27:26 multinode-554300 kubelet[2653]: I0108 21:27:26.187267    2653 topology_manager.go:215] "Topology Admit Handler" podUID="f1d70203-3637-4218-b3de-e95f0a6c677e" podNamespace="default" podName="busybox-5bc68d56bd-hrhnw"
	Jan 08 21:27:26 multinode-554300 kubelet[2653]: I0108 21:27:26.214940    2653 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4msbv\" (UniqueName: \"kubernetes.io/projected/f1d70203-3637-4218-b3de-e95f0a6c677e-kube-api-access-4msbv\") pod \"busybox-5bc68d56bd-hrhnw\" (UID: \"f1d70203-3637-4218-b3de-e95f0a6c677e\") " pod="default/busybox-5bc68d56bd-hrhnw"
	Jan 08 21:27:27 multinode-554300 kubelet[2653]: I0108 21:27:27.238837    2653 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5200fb3682dbeb67b8a7cd3c826bb2af4ba32ac5dc4f6db9e0a740c3b674fa7"
	Jan 08 21:27:29 multinode-554300 kubelet[2653]: I0108 21:27:29.305353    2653 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-hrhnw" podStartSLOduration=2.350109989 podCreationTimestamp="2024-01-08 21:27:26 +0000 UTC" firstStartedPulling="2024-01-08 21:27:27.288169051 +0000 UTC m=+235.261552677" lastFinishedPulling="2024-01-08 21:27:28.243370004 +0000 UTC m=+236.216753530" observedRunningTime="2024-01-08 21:27:29.304952786 +0000 UTC m=+237.278336412" watchObservedRunningTime="2024-01-08 21:27:29.305310842 +0000 UTC m=+237.278694468"
	Jan 08 21:27:32 multinode-554300 kubelet[2653]: E0108 21:27:32.386967    2653 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:27:32 multinode-554300 kubelet[2653]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:27:32 multinode-554300 kubelet[2653]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:27:32 multinode-554300 kubelet[2653]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0108 21:28:09.483568    8924 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-554300 -n multinode-554300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-554300 -n multinode-554300: (12.1211083s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-554300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (57.46s)

TestMultiNode/serial/RestartKeepsNodes (541.39s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-554300
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-554300
E0108 21:42:52.231963    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 21:42:58.298696    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
multinode_test.go:318: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-554300: (1m20.4210934s)
multinode_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-554300 --wait=true -v=8 --alsologtostderr
E0108 21:45:48.018899    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
E0108 21:45:55.463272    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 21:47:52.245171    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 21:47:58.297808    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
E0108 21:50:31.234556    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
multinode_test.go:323: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-554300 --wait=true -v=8 --alsologtostderr: (7m4.0882223s)
multinode_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-554300
multinode_test.go:335: reported node list is not the same after restart. Before restart: multinode-554300	172.29.107.59
multinode-554300-m02	172.29.96.43
multinode-554300-m03	172.29.100.57

After restart: multinode-554300	172.29.104.77
multinode-554300-m02	172.29.97.220
multinode-554300-m03	172.29.108.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-554300 -n multinode-554300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-554300 -n multinode-554300: (12.1819703s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 logs -n 25
E0108 21:50:48.019583    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 logs -n 25: (8.9067841s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                          Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | multinode-554300 ssh -n                                                                                                 | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:34 UTC | 08 Jan 24 21:35 UTC |
	|         | multinode-554300-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-554300 cp multinode-554300-m02:/home/docker/cp-test.txt                                                       | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:35 UTC | 08 Jan 24 21:35 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile213548577\001\cp-test_multinode-554300-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-554300 ssh -n                                                                                                 | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:35 UTC | 08 Jan 24 21:35 UTC |
	|         | multinode-554300-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-554300 cp multinode-554300-m02:/home/docker/cp-test.txt                                                       | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:35 UTC | 08 Jan 24 21:35 UTC |
	|         | multinode-554300:/home/docker/cp-test_multinode-554300-m02_multinode-554300.txt                                         |                  |                   |         |                     |                     |
	| ssh     | multinode-554300 ssh -n                                                                                                 | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:35 UTC | 08 Jan 24 21:35 UTC |
	|         | multinode-554300-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-554300 ssh -n multinode-554300 sudo cat                                                                       | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:35 UTC | 08 Jan 24 21:35 UTC |
	|         | /home/docker/cp-test_multinode-554300-m02_multinode-554300.txt                                                          |                  |                   |         |                     |                     |
	| cp      | multinode-554300 cp multinode-554300-m02:/home/docker/cp-test.txt                                                       | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:35 UTC | 08 Jan 24 21:36 UTC |
	|         | multinode-554300-m03:/home/docker/cp-test_multinode-554300-m02_multinode-554300-m03.txt                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-554300 ssh -n                                                                                                 | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:36 UTC | 08 Jan 24 21:36 UTC |
	|         | multinode-554300-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-554300 ssh -n multinode-554300-m03 sudo cat                                                                   | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:36 UTC | 08 Jan 24 21:36 UTC |
	|         | /home/docker/cp-test_multinode-554300-m02_multinode-554300-m03.txt                                                      |                  |                   |         |                     |                     |
	| cp      | multinode-554300 cp testdata\cp-test.txt                                                                                | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:36 UTC | 08 Jan 24 21:36 UTC |
	|         | multinode-554300-m03:/home/docker/cp-test.txt                                                                           |                  |                   |         |                     |                     |
	| ssh     | multinode-554300 ssh -n                                                                                                 | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:36 UTC | 08 Jan 24 21:36 UTC |
	|         | multinode-554300-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-554300 cp multinode-554300-m03:/home/docker/cp-test.txt                                                       | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:36 UTC | 08 Jan 24 21:36 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile213548577\001\cp-test_multinode-554300-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-554300 ssh -n                                                                                                 | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:36 UTC | 08 Jan 24 21:37 UTC |
	|         | multinode-554300-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-554300 cp multinode-554300-m03:/home/docker/cp-test.txt                                                       | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:37 UTC | 08 Jan 24 21:37 UTC |
	|         | multinode-554300:/home/docker/cp-test_multinode-554300-m03_multinode-554300.txt                                         |                  |                   |         |                     |                     |
	| ssh     | multinode-554300 ssh -n                                                                                                 | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:37 UTC | 08 Jan 24 21:37 UTC |
	|         | multinode-554300-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-554300 ssh -n multinode-554300 sudo cat                                                                       | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:37 UTC | 08 Jan 24 21:37 UTC |
	|         | /home/docker/cp-test_multinode-554300-m03_multinode-554300.txt                                                          |                  |                   |         |                     |                     |
	| cp      | multinode-554300 cp multinode-554300-m03:/home/docker/cp-test.txt                                                       | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:37 UTC | 08 Jan 24 21:37 UTC |
	|         | multinode-554300-m02:/home/docker/cp-test_multinode-554300-m03_multinode-554300-m02.txt                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-554300 ssh -n                                                                                                 | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:37 UTC | 08 Jan 24 21:38 UTC |
	|         | multinode-554300-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-554300 ssh -n multinode-554300-m02 sudo cat                                                                   | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:38 UTC | 08 Jan 24 21:38 UTC |
	|         | /home/docker/cp-test_multinode-554300-m03_multinode-554300-m02.txt                                                      |                  |                   |         |                     |                     |
	| node    | multinode-554300 node stop m03                                                                                          | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:38 UTC | 08 Jan 24 21:38 UTC |
	| node    | multinode-554300 node start                                                                                             | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:39 UTC | 08 Jan 24 21:41 UTC |
	|         | m03 --alsologtostderr                                                                                                   |                  |                   |         |                     |                     |
	| node    | list -p multinode-554300                                                                                                | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:42 UTC |                     |
	| stop    | -p multinode-554300                                                                                                     | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:42 UTC | 08 Jan 24 21:43 UTC |
	| start   | -p multinode-554300                                                                                                     | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:43 UTC | 08 Jan 24 21:50 UTC |
	|         | --wait=true -v=8                                                                                                        |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                       |                  |                   |         |                     |                     |
	| node    | list -p multinode-554300                                                                                                | multinode-554300 | minikube7\jenkins | v1.32.0 | 08 Jan 24 21:50 UTC |                     |
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:43:27
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:43:27.919469   10884 out.go:296] Setting OutFile to fd 1420 ...
	I0108 21:43:27.920361   10884 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:43:27.920361   10884 out.go:309] Setting ErrFile to fd 1496...
	I0108 21:43:27.920361   10884 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:43:27.942723   10884 out.go:303] Setting JSON to false
	I0108 21:43:27.945410   10884 start.go:128] hostinfo: {"hostname":"minikube7","uptime":28149,"bootTime":1704722057,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0108 21:43:27.945410   10884 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 21:43:27.946380   10884 out.go:177] * [multinode-554300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I0108 21:43:27.947497   10884 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 21:43:27.948261   10884 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:43:27.948894   10884 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0108 21:43:27.947497   10884 notify.go:220] Checking for updates...
	I0108 21:43:27.949617   10884 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 21:43:27.950318   10884 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:43:27.952039   10884 config.go:182] Loaded profile config "multinode-554300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:43:27.952039   10884 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:43:33.250297   10884 out.go:177] * Using the hyperv driver based on existing profile
	I0108 21:43:33.251031   10884 start.go:298] selected driver: hyperv
	I0108 21:43:33.251031   10884 start.go:902] validating driver "hyperv" against &{Name:multinode-554300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-554300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.29.107.59 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.96.43 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.100.57 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inacc
el:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:43:33.251409   10884 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:43:33.299821   10884 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:43:33.299977   10884 cni.go:84] Creating CNI manager for ""
	I0108 21:43:33.299977   10884 cni.go:136] 3 nodes found, recommending kindnet
	I0108 21:43:33.299977   10884 start_flags.go:323] config:
	{Name:multinode-554300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-554300 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.29.107.59 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.96.43 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.100.57 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:43:33.300133   10884 iso.go:125] acquiring lock: {Name:mk1869c5b5033c1301af33a7fa364ec43dd63efe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:43:33.301781   10884 out.go:177] * Starting control plane node multinode-554300 in cluster multinode-554300
	I0108 21:43:33.302895   10884 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 21:43:33.302895   10884 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0108 21:43:33.302895   10884 cache.go:56] Caching tarball of preloaded images
	I0108 21:43:33.302895   10884 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:43:33.303676   10884 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0108 21:43:33.303676   10884 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\config.json ...
	I0108 21:43:33.306013   10884 start.go:365] acquiring machines lock for multinode-554300: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:43:33.306233   10884 start.go:369] acquired machines lock for "multinode-554300" in 219.7µs
	I0108 21:43:33.306407   10884 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:43:33.306479   10884 fix.go:54] fixHost starting: 
	I0108 21:43:33.306771   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:43:35.896493   10884 main.go:141] libmachine: [stdout =====>] : Off
	
	I0108 21:43:35.896493   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:43:35.896493   10884 fix.go:102] recreateIfNeeded on multinode-554300: state=Stopped err=<nil>
	W0108 21:43:35.896493   10884 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 21:43:35.897441   10884 out.go:177] * Restarting existing hyperv VM for "multinode-554300" ...
	I0108 21:43:35.898227   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-554300
	I0108 21:43:38.734428   10884 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:43:38.734428   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:43:38.734428   10884 main.go:141] libmachine: Waiting for host to start...
	I0108 21:43:38.734561   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:43:40.921551   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:43:40.921627   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:43:40.921762   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:43:43.368781   10884 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:43:43.369037   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:43:44.381824   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:43:46.540954   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:43:46.541047   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:43:46.541377   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:43:49.036751   10884 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:43:49.036785   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:43:50.037509   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:43:52.218442   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:43:52.218442   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:43:52.218442   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:43:54.744943   10884 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:43:54.745001   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:43:55.745570   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:43:57.936586   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:43:57.936870   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:43:57.936870   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:44:00.405237   10884 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:44:00.405430   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:01.411649   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:44:03.597436   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:44:03.597436   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:03.597436   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:44:06.064384   10884 main.go:141] libmachine: [stdout =====>] : 172.29.104.77
	
	I0108 21:44:06.064384   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:06.067239   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:44:08.167721   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:44:08.167721   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:08.167721   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:44:10.670050   10884 main.go:141] libmachine: [stdout =====>] : 172.29.104.77
	
	I0108 21:44:10.670050   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:10.670287   10884 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\config.json ...
	I0108 21:44:10.673353   10884 machine.go:88] provisioning docker machine ...
	I0108 21:44:10.673457   10884 buildroot.go:166] provisioning hostname "multinode-554300"
	I0108 21:44:10.673541   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:44:12.775348   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:44:12.775348   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:12.775348   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:44:15.258192   10884 main.go:141] libmachine: [stdout =====>] : 172.29.104.77
	
	I0108 21:44:15.258503   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:15.263899   10884 main.go:141] libmachine: Using SSH client type: native
	I0108 21:44:15.265255   10884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.104.77 22 <nil> <nil>}
	I0108 21:44:15.265255   10884 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-554300 && echo "multinode-554300" | sudo tee /etc/hostname
	I0108 21:44:15.432071   10884 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-554300
	
	I0108 21:44:15.432149   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:44:17.533262   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:44:17.533303   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:17.533376   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:44:20.073848   10884 main.go:141] libmachine: [stdout =====>] : 172.29.104.77
	
	I0108 21:44:20.073926   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:20.079680   10884 main.go:141] libmachine: Using SSH client type: native
	I0108 21:44:20.080427   10884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.104.77 22 <nil> <nil>}
	I0108 21:44:20.080427   10884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-554300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-554300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-554300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:44:20.233466   10884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:44:20.233586   10884 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0108 21:44:20.233644   10884 buildroot.go:174] setting up certificates
	I0108 21:44:20.233644   10884 provision.go:83] configureAuth start
	I0108 21:44:20.233720   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:44:22.328928   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:44:22.328928   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:22.328928   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:44:24.852232   10884 main.go:141] libmachine: [stdout =====>] : 172.29.104.77
	
	I0108 21:44:24.852232   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:24.852341   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:44:26.973144   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:44:26.973334   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:26.973422   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:44:29.484812   10884 main.go:141] libmachine: [stdout =====>] : 172.29.104.77
	
	I0108 21:44:29.484812   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:29.484812   10884 provision.go:138] copyHostCerts
	I0108 21:44:29.484812   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0108 21:44:29.484812   10884 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0108 21:44:29.484812   10884 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0108 21:44:29.485762   10884 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0108 21:44:29.487046   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0108 21:44:29.487046   10884 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0108 21:44:29.487046   10884 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0108 21:44:29.487572   10884 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0108 21:44:29.488300   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0108 21:44:29.488300   10884 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0108 21:44:29.488300   10884 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0108 21:44:29.488945   10884 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0108 21:44:29.490260   10884 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-554300 san=[172.29.104.77 172.29.104.77 localhost 127.0.0.1 minikube multinode-554300]
	I0108 21:44:29.726314   10884 provision.go:172] copyRemoteCerts
	I0108 21:44:29.738960   10884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:44:29.738960   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:44:31.822790   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:44:31.822790   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:31.822912   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:44:34.334052   10884 main.go:141] libmachine: [stdout =====>] : 172.29.104.77
	
	I0108 21:44:34.334052   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:34.334788   10884 sshutil.go:53] new ssh client: &{IP:172.29.104.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300\id_rsa Username:docker}
	I0108 21:44:34.446100   10884 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7071179s)
	I0108 21:44:34.446269   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0108 21:44:34.446625   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:44:34.486252   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0108 21:44:34.486787   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0108 21:44:34.523629   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0108 21:44:34.523629   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:44:34.559514   10884 provision.go:86] duration metric: configureAuth took 14.3257259s
	I0108 21:44:34.559514   10884 buildroot.go:189] setting minikube options for container-runtime
	I0108 21:44:34.559514   10884 config.go:182] Loaded profile config "multinode-554300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:44:34.559514   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:44:36.668396   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:44:36.668550   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:36.668550   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:44:39.165810   10884 main.go:141] libmachine: [stdout =====>] : 172.29.104.77
	
	I0108 21:44:39.165810   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:39.171669   10884 main.go:141] libmachine: Using SSH client type: native
	I0108 21:44:39.172448   10884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.104.77 22 <nil> <nil>}
	I0108 21:44:39.172448   10884 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 21:44:39.330491   10884 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0108 21:44:39.330614   10884 buildroot.go:70] root file system type: tmpfs
	I0108 21:44:39.330687   10884 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 21:44:39.330687   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:44:41.411338   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:44:41.411338   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:41.411338   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:44:43.865979   10884 main.go:141] libmachine: [stdout =====>] : 172.29.104.77
	
	I0108 21:44:43.865979   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:43.870435   10884 main.go:141] libmachine: Using SSH client type: native
	I0108 21:44:43.871279   10884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.104.77 22 <nil> <nil>}
	I0108 21:44:43.871279   10884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 21:44:44.051110   10884 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 21:44:44.051266   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:44:46.125214   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:44:46.125430   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:46.125430   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:44:48.615040   10884 main.go:141] libmachine: [stdout =====>] : 172.29.104.77
	
	I0108 21:44:48.615040   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:48.622404   10884 main.go:141] libmachine: Using SSH client type: native
	I0108 21:44:48.622899   10884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.104.77 22 <nil> <nil>}
	I0108 21:44:48.622899   10884 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 21:44:49.862663   10884 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0108 21:44:49.862663   10884 machine.go:91] provisioned docker machine in 39.1890361s
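
The diff-or-replace command above is what makes this provisioning step idempotent: the freshly rendered docker.service.new is only moved into place, and the daemon reloaded and restarted, when its content differs from (or there is no) existing unit, which is exactly what the "can't stat" output shows on this pass. A minimal sketch of the same pattern, with an illustrative unit name standing in for docker.service:

    # Render a candidate unit, then swap it in only when it differs from what is installed.
    UNIT=/lib/systemd/system/example.service      # illustrative path, not from the log
    NEW="$UNIT.new"
    printf '%s\n' '[Unit]' 'Description=Example' '[Service]' 'ExecStart=/bin/sleep infinity' | sudo tee "$NEW" >/dev/null
    # diff exits non-zero when the files differ or the old unit does not exist yet,
    # so the replace-and-restart branch only runs when something actually changed.
    sudo diff -u "$UNIT" "$NEW" || {
      sudo mv "$NEW" "$UNIT"
      sudo systemctl daemon-reload && sudo systemctl enable example.service && sudo systemctl restart example.service
    }
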
	I0108 21:44:49.862663   10884 start.go:300] post-start starting for "multinode-554300" (driver="hyperv")
	I0108 21:44:49.862814   10884 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:44:49.876254   10884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:44:49.876254   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:44:51.924099   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:44:51.924280   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:51.924388   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:44:54.457335   10884 main.go:141] libmachine: [stdout =====>] : 172.29.104.77
	
	I0108 21:44:54.457732   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:54.458244   10884 sshutil.go:53] new ssh client: &{IP:172.29.104.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300\id_rsa Username:docker}
	I0108 21:44:54.568731   10884 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6924546s)
	I0108 21:44:54.582826   10884 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:44:54.590568   10884 command_runner.go:130] > NAME=Buildroot
	I0108 21:44:54.590568   10884 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0108 21:44:54.590760   10884 command_runner.go:130] > ID=buildroot
	I0108 21:44:54.590760   10884 command_runner.go:130] > VERSION_ID=2021.02.12
	I0108 21:44:54.590760   10884 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0108 21:44:54.590969   10884 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 21:44:54.590969   10884 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0108 21:44:54.590969   10884 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0108 21:44:54.592755   10884 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem -> 30082.pem in /etc/ssl/certs
	I0108 21:44:54.592755   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem -> /etc/ssl/certs/30082.pem
	I0108 21:44:54.606495   10884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:44:54.625989   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem --> /etc/ssl/certs/30082.pem (1708 bytes)
	I0108 21:44:54.663406   10884 start.go:303] post-start completed in 4.8007192s
	I0108 21:44:54.663406   10884 fix.go:56] fixHost completed within 1m21.3565323s
	I0108 21:44:54.664948   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:44:56.746353   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:44:56.746353   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:56.746353   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:44:59.259348   10884 main.go:141] libmachine: [stdout =====>] : 172.29.104.77
	
	I0108 21:44:59.259348   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:44:59.265822   10884 main.go:141] libmachine: Using SSH client type: native
	I0108 21:44:59.266621   10884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.104.77 22 <nil> <nil>}
	I0108 21:44:59.266621   10884 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 21:44:59.421750   10884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704750299.427047950
	
	I0108 21:44:59.421750   10884 fix.go:206] guest clock: 1704750299.427047950
	I0108 21:44:59.421750   10884 fix.go:219] Guest: 2024-01-08 21:44:59.42704795 +0000 UTC Remote: 2024-01-08 21:44:54.6634061 +0000 UTC m=+86.918703801 (delta=4.76364185s)
	I0108 21:44:59.421750   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:45:01.505533   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:45:01.505533   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:45:01.505533   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:45:04.019640   10884 main.go:141] libmachine: [stdout =====>] : 172.29.104.77
	
	I0108 21:45:04.019640   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:45:04.025744   10884 main.go:141] libmachine: Using SSH client type: native
	I0108 21:45:04.026449   10884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.104.77 22 <nil> <nil>}
	I0108 21:45:04.026449   10884 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1704750299
	I0108 21:45:04.195990   10884 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan  8 21:44:59 UTC 2024
	
	I0108 21:45:04.195990   10884 fix.go:226] clock set: Mon Jan  8 21:44:59 UTC 2024
	 (err=<nil>)
	I0108 21:45:04.195990   10884 start.go:83] releasing machines lock for "multinode-554300", held for 1m30.8893164s
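
The clock fix just above works in two SSH round trips: read the guest clock with date +%s.%N, compare it with the host wall clock, and if the drift is outside tolerance, pin the guest with date -s @<epoch>. A rough sketch of that check, run on a single machine for illustration (the one-second tolerance is an assumption, not minikube's actual threshold):

    # Reset the clock when it drifts more than a second from a reference epoch.
    reference=$(date +%s)        # stand-in for the host-supplied time
    guest=$(date +%s)            # on the real path this is read over SSH from the VM
    drift=$(( guest > reference ? guest - reference : reference - guest ))
    if [ "$drift" -gt 1 ]; then
      sudo date -s "@${reference}"
    fi
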
	I0108 21:45:04.196581   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:45:06.269561   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:45:06.269561   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:45:06.270599   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:45:08.749814   10884 main.go:141] libmachine: [stdout =====>] : 172.29.104.77
	
	I0108 21:45:08.749982   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:45:08.757633   10884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:45:08.757777   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:45:08.765985   10884 ssh_runner.go:195] Run: cat /version.json
	I0108 21:45:08.765985   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:45:10.932992   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:45:10.933198   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:45:10.932992   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:45:10.933305   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:45:10.933305   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:45:10.933353   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:45:13.582702   10884 main.go:141] libmachine: [stdout =====>] : 172.29.104.77
	
	I0108 21:45:13.582970   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:45:13.583206   10884 sshutil.go:53] new ssh client: &{IP:172.29.104.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300\id_rsa Username:docker}
	I0108 21:45:13.604063   10884 main.go:141] libmachine: [stdout =====>] : 172.29.104.77
	
	I0108 21:45:13.604063   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:45:13.605098   10884 sshutil.go:53] new ssh client: &{IP:172.29.104.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300\id_rsa Username:docker}
	I0108 21:45:13.781085   10884 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 21:45:13.781085   10884 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0233949s)
	I0108 21:45:13.781311   10884 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1702660877-17806", "minikube_version": "v1.32.0", "commit": "957da21b08687cca2533dd65b67e68ead277b79e"}
	I0108 21:45:13.781311   10884 ssh_runner.go:235] Completed: cat /version.json: (5.0153018s)
	I0108 21:45:13.795794   10884 ssh_runner.go:195] Run: systemctl --version
	I0108 21:45:13.804629   10884 command_runner.go:130] > systemd 247 (247)
	I0108 21:45:13.804742   10884 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0108 21:45:13.819874   10884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 21:45:13.827738   10884 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0108 21:45:13.828268   10884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 21:45:13.844778   10884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:45:13.870529   10884 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0108 21:45:13.870529   10884 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 21:45:13.870529   10884 start.go:475] detecting cgroup driver to use...
	I0108 21:45:13.870529   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:45:13.904151   10884 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0108 21:45:13.919626   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0108 21:45:13.949060   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 21:45:13.964563   10884 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 21:45:13.980855   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 21:45:14.012344   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 21:45:14.043565   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 21:45:14.073555   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 21:45:14.104146   10884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:45:14.134609   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
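
The sed edits above rewrite /etc/containerd/config.toml in place: cgroupfs instead of the systemd cgroup driver, the runc v2 shim, pause:3.9 as the sandbox image, and /etc/cni/net.d as the CNI config directory. A read-only spot check of those keys, assuming the default config path:

    # Show the lines the sed commands above are expected to have produced.
    grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir' /etc/containerd/config.toml
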
	I0108 21:45:14.165371   10884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:45:14.179731   10884 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0108 21:45:14.193908   10884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:45:14.226422   10884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:45:14.410854   10884 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:45:14.437446   10884 start.go:475] detecting cgroup driver to use...
	I0108 21:45:14.454023   10884 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 21:45:14.473388   10884 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0108 21:45:14.473388   10884 command_runner.go:130] > [Unit]
	I0108 21:45:14.473388   10884 command_runner.go:130] > Description=Docker Application Container Engine
	I0108 21:45:14.473388   10884 command_runner.go:130] > Documentation=https://docs.docker.com
	I0108 21:45:14.473516   10884 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0108 21:45:14.473516   10884 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0108 21:45:14.473516   10884 command_runner.go:130] > StartLimitBurst=3
	I0108 21:45:14.473516   10884 command_runner.go:130] > StartLimitIntervalSec=60
	I0108 21:45:14.473516   10884 command_runner.go:130] > [Service]
	I0108 21:45:14.473516   10884 command_runner.go:130] > Type=notify
	I0108 21:45:14.473516   10884 command_runner.go:130] > Restart=on-failure
	I0108 21:45:14.473516   10884 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0108 21:45:14.473516   10884 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0108 21:45:14.473626   10884 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0108 21:45:14.473626   10884 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0108 21:45:14.473626   10884 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0108 21:45:14.473626   10884 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0108 21:45:14.473626   10884 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0108 21:45:14.473715   10884 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0108 21:45:14.473743   10884 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0108 21:45:14.473743   10884 command_runner.go:130] > ExecStart=
	I0108 21:45:14.473743   10884 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0108 21:45:14.473743   10884 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0108 21:45:14.473743   10884 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0108 21:45:14.473884   10884 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0108 21:45:14.473884   10884 command_runner.go:130] > LimitNOFILE=infinity
	I0108 21:45:14.473884   10884 command_runner.go:130] > LimitNPROC=infinity
	I0108 21:45:14.473884   10884 command_runner.go:130] > LimitCORE=infinity
	I0108 21:45:14.473884   10884 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0108 21:45:14.473884   10884 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0108 21:45:14.473884   10884 command_runner.go:130] > TasksMax=infinity
	I0108 21:45:14.473884   10884 command_runner.go:130] > TimeoutStartSec=0
	I0108 21:45:14.473884   10884 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0108 21:45:14.473884   10884 command_runner.go:130] > Delegate=yes
	I0108 21:45:14.473993   10884 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0108 21:45:14.473993   10884 command_runner.go:130] > KillMode=process
	I0108 21:45:14.473993   10884 command_runner.go:130] > [Install]
	I0108 21:45:14.473993   10884 command_runner.go:130] > WantedBy=multi-user.target
	I0108 21:45:14.486516   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:45:14.519060   10884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:45:14.560154   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:45:14.593152   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:45:14.625036   10884 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:45:14.676670   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:45:14.696806   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:45:14.732373   10884 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0108 21:45:14.745908   10884 ssh_runner.go:195] Run: which cri-dockerd
	I0108 21:45:14.751242   10884 command_runner.go:130] > /usr/bin/cri-dockerd
	I0108 21:45:14.766759   10884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 21:45:14.780603   10884 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0108 21:45:14.822581   10884 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 21:45:14.991465   10884 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 21:45:15.164697   10884 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0108 21:45:15.164697   10884 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0108 21:45:15.211522   10884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:45:15.382578   10884 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 21:45:16.977162   10884 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5945765s)
	I0108 21:45:16.989146   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0108 21:45:17.026554   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0108 21:45:17.059117   10884 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0108 21:45:17.221116   10884 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 21:45:17.377713   10884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:45:17.532897   10884 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0108 21:45:17.570078   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0108 21:45:17.599086   10884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:45:17.758335   10884 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0108 21:45:17.862356   10884 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 21:45:17.874345   10884 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 21:45:17.881677   10884 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0108 21:45:17.881815   10884 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 21:45:17.881891   10884 command_runner.go:130] > Device: 16h/22d	Inode: 929         Links: 1
	I0108 21:45:17.881891   10884 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0108 21:45:17.881891   10884 command_runner.go:130] > Access: 2024-01-08 21:45:17.783961870 +0000
	I0108 21:45:17.881891   10884 command_runner.go:130] > Modify: 2024-01-08 21:45:17.783961870 +0000
	I0108 21:45:17.881891   10884 command_runner.go:130] > Change: 2024-01-08 21:45:17.787961870 +0000
	I0108 21:45:17.881891   10884 command_runner.go:130] >  Birth: -
	I0108 21:45:17.881891   10884 start.go:543] Will wait 60s for crictl version
	I0108 21:45:17.897019   10884 ssh_runner.go:195] Run: which crictl
	I0108 21:45:17.901587   10884 command_runner.go:130] > /usr/bin/crictl
	I0108 21:45:17.916828   10884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 21:45:17.986588   10884 command_runner.go:130] > Version:  0.1.0
	I0108 21:45:17.986657   10884 command_runner.go:130] > RuntimeName:  docker
	I0108 21:45:17.986657   10884 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0108 21:45:17.986657   10884 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 21:45:17.986657   10884 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
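
Because /etc/crictl.yaml now points runtime-endpoint at the cri-dockerd socket, crictl resolves the endpoint without extra flags; the explicit flag form below is shown only to make that assumption visible, it is not a step taken in the log:

    # 'crictl version' should report RuntimeName: docker, as it does above;
    # 'crictl info' dumps the runtime's status and config for a deeper look.
    sudo crictl version
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info
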
	I0108 21:45:17.996197   10884 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 21:45:18.024226   10884 command_runner.go:130] > 24.0.7
	I0108 21:45:18.037477   10884 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 21:45:18.071981   10884 command_runner.go:130] > 24.0.7
	I0108 21:45:18.073782   10884 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0108 21:45:18.073782   10884 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0108 21:45:18.078311   10884 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0108 21:45:18.078311   10884 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0108 21:45:18.078311   10884 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0108 21:45:18.078311   10884 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:21:91:d6 Flags:up|broadcast|multicast|running}
	I0108 21:45:18.081592   10884 ip.go:210] interface addr: fe80::2be2:9d7a:5cc4:f25c/64
	I0108 21:45:18.081592   10884 ip.go:210] interface addr: 172.29.96.1/20
	I0108 21:45:18.092401   10884 ssh_runner.go:195] Run: grep 172.29.96.1	host.minikube.internal$ /etc/hosts
	I0108 21:45:18.098215   10884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:45:18.115307   10884 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 21:45:18.124864   10884 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 21:45:18.149672   10884 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0108 21:45:18.149672   10884 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0108 21:45:18.149672   10884 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0108 21:45:18.149672   10884 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0108 21:45:18.149672   10884 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0108 21:45:18.149672   10884 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0108 21:45:18.149672   10884 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0108 21:45:18.149672   10884 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0108 21:45:18.149672   10884 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:45:18.149672   10884 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0108 21:45:18.149672   10884 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0108 21:45:18.149672   10884 docker.go:615] Images already preloaded, skipping extraction
	I0108 21:45:18.160480   10884 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 21:45:18.188229   10884 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0108 21:45:18.188229   10884 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0108 21:45:18.188229   10884 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0108 21:45:18.188229   10884 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0108 21:45:18.188229   10884 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0108 21:45:18.188229   10884 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0108 21:45:18.188324   10884 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0108 21:45:18.188324   10884 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0108 21:45:18.188324   10884 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:45:18.188324   10884 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0108 21:45:18.188402   10884 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0108 21:45:18.188464   10884 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:45:18.197251   10884 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 21:45:18.229848   10884 command_runner.go:130] > cgroupfs
	I0108 21:45:18.229848   10884 cni.go:84] Creating CNI manager for ""
	I0108 21:45:18.229848   10884 cni.go:136] 3 nodes found, recommending kindnet
	I0108 21:45:18.229848   10884 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:45:18.229848   10884 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.29.104.77 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-554300 NodeName:multinode-554300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.29.104.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.29.104.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 21:45:18.230562   10884 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.29.104.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-554300"
	  kubeletExtraArgs:
	    node-ip: 172.29.104.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.29.104.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:45:18.230562   10884 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-554300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.104.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-554300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
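
The kubeadm.yaml.new generated above is later diffed against the existing config to decide whether a reconfigure is needed. If one wanted to sanity-check such a file by hand before it is swapped in, recent kubeadm releases can parse and validate it offline; this is an illustration, not something the log shows minikube doing:

    # Validate the generated config documents without touching the running cluster.
    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
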
	I0108 21:45:18.245352   10884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 21:45:18.260661   10884 command_runner.go:130] > kubeadm
	I0108 21:45:18.261264   10884 command_runner.go:130] > kubectl
	I0108 21:45:18.261264   10884 command_runner.go:130] > kubelet
	I0108 21:45:18.261420   10884 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:45:18.274057   10884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:45:18.290617   10884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0108 21:45:18.320624   10884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:45:18.344475   10884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0108 21:45:18.388487   10884 ssh_runner.go:195] Run: grep 172.29.104.77	control-plane.minikube.internal$ /etc/hosts
	I0108 21:45:18.394884   10884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.104.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:45:18.414489   10884 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300 for IP: 172.29.104.77
	I0108 21:45:18.414572   10884 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:45:18.415444   10884 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0108 21:45:18.415444   10884 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0108 21:45:18.416590   10884 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\client.key
	I0108 21:45:18.416590   10884 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.key.ee9a37fd
	I0108 21:45:18.417109   10884 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.crt.ee9a37fd with IP's: [172.29.104.77 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 21:45:18.639755   10884 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.crt.ee9a37fd ...
	I0108 21:45:18.639755   10884 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.crt.ee9a37fd: {Name:mk7ca2d8a0a5294e9a64923bc5120d3d3b587885 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:45:18.640754   10884 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.key.ee9a37fd ...
	I0108 21:45:18.640754   10884 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.key.ee9a37fd: {Name:mk39f5fa4b120cce24cf16baf9b6d6229fe83cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:45:18.641769   10884 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.crt.ee9a37fd -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.crt
	I0108 21:45:18.655762   10884 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.key.ee9a37fd -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.key
	I0108 21:45:18.657772   10884 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\proxy-client.key
	I0108 21:45:18.657772   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 21:45:18.658266   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 21:45:18.658824   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 21:45:18.658824   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 21:45:18.658824   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 21:45:18.659584   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0108 21:45:18.659774   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 21:45:18.659774   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 21:45:18.660344   10884 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\3008.pem (1338 bytes)
	W0108 21:45:18.660703   10884 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\3008_empty.pem, impossibly tiny 0 bytes
	I0108 21:45:18.660703   10884 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0108 21:45:18.660967   10884 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0108 21:45:18.661547   10884 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0108 21:45:18.661738   10884 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0108 21:45:18.661967   10884 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem (1708 bytes)
	I0108 21:45:18.662524   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem -> /usr/share/ca-certificates/30082.pem
	I0108 21:45:18.662607   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:45:18.662781   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\3008.pem -> /usr/share/ca-certificates/3008.pem
	I0108 21:45:18.663453   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:45:18.710385   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 21:45:18.747370   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:45:18.787195   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 21:45:18.825548   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:45:18.864314   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 21:45:18.901623   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:45:18.946678   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 21:45:18.983164   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem --> /usr/share/ca-certificates/30082.pem (1708 bytes)
	I0108 21:45:19.022994   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:45:19.064403   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\3008.pem --> /usr/share/ca-certificates/3008.pem (1338 bytes)
	I0108 21:45:19.102728   10884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:45:19.142875   10884 ssh_runner.go:195] Run: openssl version
	I0108 21:45:19.150435   10884 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0108 21:45:19.163227   10884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30082.pem && ln -fs /usr/share/ca-certificates/30082.pem /etc/ssl/certs/30082.pem"
	I0108 21:45:19.192520   10884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/30082.pem
	I0108 21:45:19.199753   10884 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 20:28 /usr/share/ca-certificates/30082.pem
	I0108 21:45:19.199912   10884 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:28 /usr/share/ca-certificates/30082.pem
	I0108 21:45:19.214732   10884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30082.pem
	I0108 21:45:19.222074   10884 command_runner.go:130] > 3ec20f2e
	I0108 21:45:19.236240   10884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/30082.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:45:19.263670   10884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:45:19.291127   10884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:45:19.297355   10884 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 20:14 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:45:19.297355   10884 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:14 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:45:19.309749   10884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:45:19.317901   10884 command_runner.go:130] > b5213941
	I0108 21:45:19.332276   10884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:45:19.362926   10884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3008.pem && ln -fs /usr/share/ca-certificates/3008.pem /etc/ssl/certs/3008.pem"
	I0108 21:45:19.393246   10884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3008.pem
	I0108 21:45:19.399436   10884 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 20:28 /usr/share/ca-certificates/3008.pem
	I0108 21:45:19.399436   10884 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:28 /usr/share/ca-certificates/3008.pem
	I0108 21:45:19.412200   10884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3008.pem
	I0108 21:45:19.418543   10884 command_runner.go:130] > 51391683
	I0108 21:45:19.435409   10884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3008.pem /etc/ssl/certs/51391683.0"
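
The three blocks above repeat the same OpenSSL trust-store convention: copy the PEM into /usr/share/ca-certificates, compute its subject hash, and link it into /etc/ssl/certs under that hash with a .0 suffix, which is how OpenSSL locates CA certificates at verification time. Generalized as a small sketch with an illustrative certificate path:

    # Install a CA certificate the way the steps above do.
    cert=/usr/share/ca-certificates/example-ca.pem   # illustrative, not from the log
    hash=$(openssl x509 -hash -noout -in "$cert")    # e.g. 3ec20f2e
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
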
	I0108 21:45:19.465329   10884 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 21:45:19.472009   10884 command_runner.go:130] > ca.crt
	I0108 21:45:19.472009   10884 command_runner.go:130] > ca.key
	I0108 21:45:19.472009   10884 command_runner.go:130] > healthcheck-client.crt
	I0108 21:45:19.472009   10884 command_runner.go:130] > healthcheck-client.key
	I0108 21:45:19.472134   10884 command_runner.go:130] > peer.crt
	I0108 21:45:19.472134   10884 command_runner.go:130] > peer.key
	I0108 21:45:19.472134   10884 command_runner.go:130] > server.crt
	I0108 21:45:19.472134   10884 command_runner.go:130] > server.key
	I0108 21:45:19.487128   10884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 21:45:19.494643   10884 command_runner.go:130] > Certificate will not expire
	I0108 21:45:19.509400   10884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 21:45:19.516665   10884 command_runner.go:130] > Certificate will not expire
	I0108 21:45:19.529429   10884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 21:45:19.537020   10884 command_runner.go:130] > Certificate will not expire
	I0108 21:45:19.550585   10884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 21:45:19.558678   10884 command_runner.go:130] > Certificate will not expire
	I0108 21:45:19.571640   10884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 21:45:19.578936   10884 command_runner.go:130] > Certificate will not expire
	I0108 21:45:19.591650   10884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0108 21:45:19.598203   10884 command_runner.go:130] > Certificate will not expire
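
Each of the -checkend 86400 calls above asks whether a certificate will still be valid 24 hours from now; exit status 0 (and the "Certificate will not expire" line) means it will. The same check over the whole certs directory, as a sketch:

    # Report any certificate under /var/lib/minikube/certs expiring within 24h.
    sudo find /var/lib/minikube/certs -name '*.crt' -print0 |
      while IFS= read -r -d '' crt; do
        sudo openssl x509 -noout -in "$crt" -checkend 86400 >/dev/null ||
          echo "expiring soon: $crt"
      done
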
	I0108 21:45:19.599258   10884 kubeadm.go:404] StartCluster: {Name:multinode-554300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.28.4 ClusterName:multinode-554300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.29.104.77 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.96.43 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.100.57 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingre
ss:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:doc
ker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:45:19.610070   10884 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 21:45:19.646136   10884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:45:19.661843   10884 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0108 21:45:19.661843   10884 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0108 21:45:19.661843   10884 command_runner.go:130] > /var/lib/minikube/etcd:
	I0108 21:45:19.661843   10884 command_runner.go:130] > member
	I0108 21:45:19.661843   10884 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 21:45:19.661843   10884 kubeadm.go:636] restartCluster start
	I0108 21:45:19.673232   10884 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:45:19.686985   10884 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:45:19.687801   10884 kubeconfig.go:135] verify returned: extract IP: "multinode-554300" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 21:45:19.687801   10884 kubeconfig.go:146] "multinode-554300" context is missing from C:\Users\jenkins.minikube7\minikube-integration\kubeconfig - will repair!
	I0108 21:45:19.688532   10884 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:45:19.700586   10884 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 21:45:19.701118   10884 kapi.go:59] client config for multinode-554300: &rest.Config{Host:"https://172.29.104.77:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300/client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300/client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData
:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x188c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:45:19.702495   10884 cert_rotation.go:137] Starting client certificate rotation controller
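
With the multinode-554300 context repaired into the kubeconfig as logged above, the entry can be confirmed from the host; kubectl here is an illustration rather than a step the test takes:

    # Confirm the repaired context exists and points at the new API server address.
    kubectl config get-contexts multinode-554300
    kubectl config view -o jsonpath='{.clusters[?(@.name=="multinode-554300")].cluster.server}'
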
	I0108 21:45:19.714115   10884 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:45:19.728559   10884 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0108 21:45:19.728559   10884 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:45:19.728559   10884 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0108 21:45:19.728559   10884 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0108 21:45:19.728559   10884 command_runner.go:130] >  kind: InitConfiguration
	I0108 21:45:19.728559   10884 command_runner.go:130] >  localAPIEndpoint:
	I0108 21:45:19.728559   10884 command_runner.go:130] > -  advertiseAddress: 172.29.107.59
	I0108 21:45:19.728559   10884 command_runner.go:130] > +  advertiseAddress: 172.29.104.77
	I0108 21:45:19.728559   10884 command_runner.go:130] >    bindPort: 8443
	I0108 21:45:19.728559   10884 command_runner.go:130] >  bootstrapTokens:
	I0108 21:45:19.728559   10884 command_runner.go:130] >    - groups:
	I0108 21:45:19.728559   10884 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0108 21:45:19.728559   10884 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0108 21:45:19.728559   10884 command_runner.go:130] >    name: "multinode-554300"
	I0108 21:45:19.728559   10884 command_runner.go:130] >    kubeletExtraArgs:
	I0108 21:45:19.728559   10884 command_runner.go:130] > -    node-ip: 172.29.107.59
	I0108 21:45:19.728559   10884 command_runner.go:130] > +    node-ip: 172.29.104.77
	I0108 21:45:19.728559   10884 command_runner.go:130] >    taints: []
	I0108 21:45:19.728559   10884 command_runner.go:130] >  ---
	I0108 21:45:19.728559   10884 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0108 21:45:19.728559   10884 command_runner.go:130] >  kind: ClusterConfiguration
	I0108 21:45:19.728559   10884 command_runner.go:130] >  apiServer:
	I0108 21:45:19.728559   10884 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.29.107.59"]
	I0108 21:45:19.728559   10884 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.29.104.77"]
	I0108 21:45:19.728559   10884 command_runner.go:130] >    extraArgs:
	I0108 21:45:19.728559   10884 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0108 21:45:19.728559   10884 command_runner.go:130] >  controllerManager:
	I0108 21:45:19.728559   10884 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.29.107.59
	+  advertiseAddress: 172.29.104.77
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-554300"
	   kubeletExtraArgs:
	-    node-ip: 172.29.107.59
	+    node-ip: 172.29.104.77
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.29.107.59"]
	+  certSANs: ["127.0.0.1", "localhost", "172.29.104.77"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
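The diff above is what drives the "needs reconfigure" decision: the VM came back from the restart with 172.29.104.77 instead of 172.29.107.59, so advertiseAddress, the kubelet node-ip and the apiserver certSANs in kubeadm.yaml are stale. The same comparison can be reproduced by hand with the paths from the log, for example (a sketch, not part of the test run):

    minikube -p multinode-554300 ssh -- \
      sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new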
	I0108 21:45:19.729120   10884 kubeadm.go:1135] stopping kube-system containers ...
	I0108 21:45:19.739868   10884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 21:45:19.766193   10884 command_runner.go:130] > 146f9c24d2a4
	I0108 21:45:19.766256   10884 command_runner.go:130] > 77cfa745d678
	I0108 21:45:19.766256   10884 command_runner.go:130] > 2079ab544b8d
	I0108 21:45:19.766256   10884 command_runner.go:130] > c1d7f7a3821e
	I0108 21:45:19.766256   10884 command_runner.go:130] > 359babcc50a6
	I0108 21:45:19.766256   10884 command_runner.go:130] > 2c18647ee331
	I0108 21:45:19.766256   10884 command_runner.go:130] > ceed09dba4fb
	I0108 21:45:19.766256   10884 command_runner.go:130] > 5e4892494426
	I0108 21:45:19.766256   10884 command_runner.go:130] > 3f926c6626bf
	I0108 21:45:19.766256   10884 command_runner.go:130] > c193667d32e4
	I0108 21:45:19.766256   10884 command_runner.go:130] > 5a21be70e8c8
	I0108 21:45:19.766256   10884 command_runner.go:130] > eb93c2ad9198
	I0108 21:45:19.766256   10884 command_runner.go:130] > 4081e28ae545
	I0108 21:45:19.766256   10884 command_runner.go:130] > 6c64a54424c9
	I0108 21:45:19.766256   10884 command_runner.go:130] > d5dd9fc6e97e
	I0108 21:45:19.766256   10884 command_runner.go:130] > b50a134590a7
	I0108 21:45:19.766256   10884 docker.go:483] Stopping containers: [146f9c24d2a4 77cfa745d678 2079ab544b8d c1d7f7a3821e 359babcc50a6 2c18647ee331 ceed09dba4fb 5e4892494426 3f926c6626bf c193667d32e4 5a21be70e8c8 eb93c2ad9198 4081e28ae545 6c64a54424c9 d5dd9fc6e97e b50a134590a7]
	I0108 21:45:19.775257   10884 ssh_runner.go:195] Run: docker stop 146f9c24d2a4 77cfa745d678 2079ab544b8d c1d7f7a3821e 359babcc50a6 2c18647ee331 ceed09dba4fb 5e4892494426 3f926c6626bf c193667d32e4 5a21be70e8c8 eb93c2ad9198 4081e28ae545 6c64a54424c9 d5dd9fc6e97e b50a134590a7
	I0108 21:45:19.806242   10884 command_runner.go:130] > 146f9c24d2a4
	I0108 21:45:19.806242   10884 command_runner.go:130] > 77cfa745d678
	I0108 21:45:19.806242   10884 command_runner.go:130] > 2079ab544b8d
	I0108 21:45:19.806242   10884 command_runner.go:130] > c1d7f7a3821e
	I0108 21:45:19.806242   10884 command_runner.go:130] > 359babcc50a6
	I0108 21:45:19.806242   10884 command_runner.go:130] > 2c18647ee331
	I0108 21:45:19.806242   10884 command_runner.go:130] > ceed09dba4fb
	I0108 21:45:19.806242   10884 command_runner.go:130] > 5e4892494426
	I0108 21:45:19.806242   10884 command_runner.go:130] > 3f926c6626bf
	I0108 21:45:19.806242   10884 command_runner.go:130] > c193667d32e4
	I0108 21:45:19.806242   10884 command_runner.go:130] > 5a21be70e8c8
	I0108 21:45:19.806242   10884 command_runner.go:130] > eb93c2ad9198
	I0108 21:45:19.806242   10884 command_runner.go:130] > 4081e28ae545
	I0108 21:45:19.806242   10884 command_runner.go:130] > 6c64a54424c9
	I0108 21:45:19.806242   10884 command_runner.go:130] > d5dd9fc6e97e
	I0108 21:45:19.806242   10884 command_runner.go:130] > b50a134590a7
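Before regenerating the control plane, minikube stops every kube-system container it can find; the second block of IDs above is docker confirming each stop. The two logged docker commands collapse into one pipeline, assuming GNU xargs is available on the guest (sketch only):

    docker ps -a --filter=name='k8s_.*_(kube-system)_' --format='{{.ID}}' \
      | xargs -r docker stop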
	I0108 21:45:19.819246   10884 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:45:19.855820   10884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:45:19.873758   10884 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0108 21:45:19.873758   10884 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0108 21:45:19.874793   10884 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0108 21:45:19.874793   10884 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:45:19.875058   10884 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
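The stale-config cleanup is skipped here because none of the kubeadm-generated kubeconfigs exist yet; the kubeconfig phase further down recreates all four. A quick per-file check along the same lines (sketch):

    for f in admin kubelet controller-manager scheduler; do
      sudo test -f /etc/kubernetes/$f.conf \
        && echo "$f.conf present" \
        || echo "$f.conf missing"
    done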
	I0108 21:45:19.888407   10884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:45:19.902864   10884 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:45:19.902949   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:45:20.293221   10884 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:45:20.293281   10884 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0108 21:45:20.293281   10884 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0108 21:45:20.293281   10884 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:45:20.293370   10884 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0108 21:45:20.293370   10884 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:45:20.293443   10884 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0108 21:45:20.293443   10884 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0108 21:45:20.293443   10884 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:45:20.293495   10884 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:45:20.293495   10884 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:45:20.293495   10884 command_runner.go:130] > [certs] Using the existing "sa" key
	I0108 21:45:20.293561   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:45:21.590329   10884 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:45:21.590396   10884 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:45:21.590460   10884 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:45:21.590460   10884 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:45:21.590460   10884 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:45:21.590548   10884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.2969341s)
	I0108 21:45:21.590629   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:45:21.671308   10884 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:45:21.673736   10884 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:45:21.674570   10884 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 21:45:21.834957   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:45:21.916401   10884 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:45:21.916401   10884 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:45:21.916401   10884 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:45:21.916401   10884 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:45:21.916401   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:45:21.997775   10884 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
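The block above replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the refreshed kubeadm.yaml instead of running a full kubeadm init, which is why every certificate is reported as already existing. Written out as a loop with the binary path and config file from the log, the sequence looks roughly like this (sketch, not the minikube implementation):

    CFG=/var/tmp/minikube/kubeadm.yaml
    BIN=/var/lib/minikube/binaries/v1.28.4
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"
    done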
	I0108 21:45:21.998092   10884 api_server.go:52] waiting for apiserver process to appear ...
	I0108 21:45:22.011673   10884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:45:22.526911   10884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:45:23.016981   10884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:45:23.512815   10884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:45:24.019278   10884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:45:24.524508   10884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:45:25.021531   10884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:45:25.046409   10884 command_runner.go:130] > 1871
	I0108 21:45:25.046515   10884 api_server.go:72] duration metric: took 3.0484804s to wait for apiserver process to appear ...
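The repeated pgrep runs above are the apiserver process wait: the same pattern is polled about twice a second until a PID (1871 here) appears. A standalone version of that poll (sketch):

    # block until a kube-apiserver process started by minikube shows up
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done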
	I0108 21:45:25.046588   10884 api_server.go:88] waiting for apiserver healthz status ...
	I0108 21:45:25.046666   10884 api_server.go:253] Checking apiserver healthz at https://172.29.104.77:8443/healthz ...
	I0108 21:45:28.479620   10884 api_server.go:279] https://172.29.104.77:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:45:28.480084   10884 api_server.go:103] status: https://172.29.104.77:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:45:28.480123   10884 api_server.go:253] Checking apiserver healthz at https://172.29.104.77:8443/healthz ...
	I0108 21:45:28.521670   10884 api_server.go:279] https://172.29.104.77:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:45:28.521670   10884 api_server.go:103] status: https://172.29.104.77:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:45:28.555415   10884 api_server.go:253] Checking apiserver healthz at https://172.29.104.77:8443/healthz ...
	I0108 21:45:28.625121   10884 api_server.go:279] https://172.29.104.77:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:45:28.625558   10884 api_server.go:103] status: https://172.29.104.77:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:45:29.048502   10884 api_server.go:253] Checking apiserver healthz at https://172.29.104.77:8443/healthz ...
	I0108 21:45:29.070750   10884 api_server.go:279] https://172.29.104.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 21:45:29.071877   10884 api_server.go:103] status: https://172.29.104.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 21:45:29.554159   10884 api_server.go:253] Checking apiserver healthz at https://172.29.104.77:8443/healthz ...
	I0108 21:45:29.563893   10884 api_server.go:279] https://172.29.104.77:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 21:45:29.563979   10884 api_server.go:103] status: https://172.29.104.77:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 21:45:30.058634   10884 api_server.go:253] Checking apiserver healthz at https://172.29.104.77:8443/healthz ...
	I0108 21:45:30.068971   10884 api_server.go:279] https://172.29.104.77:8443/healthz returned 200:
	ok
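The 403 responses are expected while anonymous requests to /healthz are still rejected, and the 500s show the rbac/bootstrap-roles and scheduling post-start hooks completing; the check flips to 200 once those hooks finish. To see the same verbose breakdown with credentials rather than anonymously, something like the following should work, assuming the admin.conf written by the kubeconfig phase above:

    sudo /var/lib/minikube/binaries/v1.28.4/kubectl \
      --kubeconfig /etc/kubernetes/admin.conf get --raw '/healthz?verbose'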
	I0108 21:45:30.068971   10884 round_trippers.go:463] GET https://172.29.104.77:8443/version
	I0108 21:45:30.068971   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:30.068971   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:30.068971   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:30.084264   10884 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0108 21:45:30.084264   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:30.084264   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:30.084428   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:30.084428   10884 round_trippers.go:580]     Content-Length: 264
	I0108 21:45:30.084428   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:30 GMT
	I0108 21:45:30.084428   10884 round_trippers.go:580]     Audit-Id: fa4fde5d-c6fa-476f-97cb-c8832b788d45
	I0108 21:45:30.084428   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:30.084428   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:30.084428   10884 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0108 21:45:30.084704   10884 api_server.go:141] control plane version: v1.28.4
	I0108 21:45:30.084748   10884 api_server.go:131] duration metric: took 5.0381367s to wait for apiserver health ...
	I0108 21:45:30.084748   10884 cni.go:84] Creating CNI manager for ""
	I0108 21:45:30.084748   10884 cni.go:136] 3 nodes found, recommending kindnet
	I0108 21:45:30.085673   10884 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:45:30.105711   10884 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:45:30.112036   10884 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 21:45:30.112036   10884 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0108 21:45:30.112036   10884 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0108 21:45:30.112036   10884 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 21:45:30.112036   10884 command_runner.go:130] > Access: 2024-01-08 21:44:03.520554000 +0000
	I0108 21:45:30.112036   10884 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0108 21:45:30.112036   10884 command_runner.go:130] > Change: 2024-01-08 21:43:53.914000000 +0000
	I0108 21:45:30.112036   10884 command_runner.go:130] >  Birth: -
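The stat on /opt/cni/bin/portmap acts as a cheap "CNI plugins are installed" probe before the manifest is applied. A compact form of the same check, with a custom output format (sketch):

    stat -c 'name=%n size=%s mode=%a' /opt/cni/bin/portmap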
	I0108 21:45:30.113029   10884 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 21:45:30.113029   10884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 21:45:30.157951   10884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:45:32.203080   10884 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 21:45:32.203080   10884 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 21:45:32.203080   10884 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 21:45:32.203080   10884 command_runner.go:130] > daemonset.apps/kindnet configured
	I0108 21:45:32.205193   10884 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.0471729s)
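With three nodes found, minikube picks kindnet and applies the CNI manifest with the guest's own kubectl; the "unchanged/configured" lines confirm the objects already existed from the first start. The daemonset can be verified afterwards with the same binary and kubeconfig (sketch):

    sudo /var/lib/minikube/binaries/v1.28.4/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get daemonset kindnet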
	I0108 21:45:32.205308   10884 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:45:32.205466   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods
	I0108 21:45:32.205528   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:32.205570   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:32.205570   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:32.211294   10884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 21:45:32.211294   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:32.211294   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:32 GMT
	I0108 21:45:32.211736   10884 round_trippers.go:580]     Audit-Id: 2fb9f7b4-de1f-4e49-8a48-e95ee20d4885
	I0108 21:45:32.211736   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:32.211736   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:32.211736   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:32.211736   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:32.213377   10884 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1745"},"items":[{"metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"1683","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84123 chars]
	I0108 21:45:32.219969   10884 system_pods.go:59] 12 kube-system pods found
	I0108 21:45:32.219969   10884 system_pods.go:61] "coredns-5dd5756b68-q7vd7" [fe215542-1a69-4152-9098-06937431fa74] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 21:45:32.219969   10884 system_pods.go:61] "etcd-multinode-554300" [55fb89f1-0f93-4967-877e-c170530dd9ed] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:45:32.219969   10884 system_pods.go:61] "kindnet-4q524" [f633fa0f-0091-439f-b152-02f668039214] Running
	I0108 21:45:32.219969   10884 system_pods.go:61] "kindnet-5r79t" [275c1f53-70c6-4922-9ba4-d931e1515729] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:45:32.219969   10884 system_pods.go:61] "kindnet-dnjjm" [4c6605a5-1db1-49f6-ae23-e2fbba50ecbc] Running
	I0108 21:45:32.219969   10884 system_pods.go:61] "kube-apiserver-multinode-554300" [ad4821d4-6eff-483c-b12d-9123225ab172] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 21:45:32.219969   10884 system_pods.go:61] "kube-controller-manager-multinode-554300" [c5c47910-dee9-4e42-8623-dbc45d13564f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:45:32.219969   10884 system_pods.go:61] "kube-proxy-jsq7c" [cbc6a2d2-bb66-4af4-8a7d-315bc293cac0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 21:45:32.219969   10884 system_pods.go:61] "kube-proxy-nbzjb" [73b08d5a-2015-4712-92b4-2d12298e9fc3] Running
	I0108 21:45:32.220509   10884 system_pods.go:61] "kube-proxy-pdt95" [e4aa76bc-96be-46f8-bc0e-7f3a6caa9883] Running
	I0108 21:45:32.220509   10884 system_pods.go:61] "kube-scheduler-multinode-554300" [f5b78bba-6cd0-495b-b6d6-c9afd93b3534] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 21:45:32.220509   10884 system_pods.go:61] "storage-provisioner" [2fb8721f-01cc-4078-b45c-964d73e3da98] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 21:45:32.220509   10884 system_pods.go:74] duration metric: took 15.2006ms to wait for pod list to return data ...
	I0108 21:45:32.220568   10884 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:45:32.220568   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes
	I0108 21:45:32.220568   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:32.220568   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:32.220568   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:32.225486   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:45:32.225486   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:32.225486   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:32.225486   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:32.225486   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:32.225486   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:32.225486   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:32 GMT
	I0108 21:45:32.225486   10884 round_trippers.go:580]     Audit-Id: b2a77afe-18b2-4aff-ad52-7427a00ce1e5
	I0108 21:45:32.226167   10884 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1745"},"items":[{"metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1667","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14856 chars]
	I0108 21:45:32.226802   10884 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:45:32.227388   10884 node_conditions.go:123] node cpu capacity is 2
	I0108 21:45:32.227508   10884 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:45:32.227508   10884 node_conditions.go:123] node cpu capacity is 2
	I0108 21:45:32.227672   10884 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:45:32.227706   10884 node_conditions.go:123] node cpu capacity is 2
	I0108 21:45:32.227706   10884 node_conditions.go:105] duration metric: took 7.138ms to run NodePressure ...
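The NodePressure step only reads capacity from the node list: all three nodes report 2 CPUs and 17784752Ki of ephemeral storage. The same figures can be read directly from a node's capacity map, for example (sketch):

    kubectl get node multinode-554300 -o jsonpath='{.status.capacity}{"\n"}'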
	I0108 21:45:32.227752   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:45:32.609989   10884 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0108 21:45:32.611011   10884 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
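kubeadm init phase addon all reinstalls the two essential addons; nothing else is applied at this point. Confirming both objects exist afterwards is a one-liner each (sketch, assuming kubectl has access to the cluster):

    kubectl -n kube-system get deployment coredns
    kubectl -n kube-system get daemonset kube-proxy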
	I0108 21:45:32.611011   10884 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 21:45:32.611231   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0108 21:45:32.611231   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:32.611284   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:32.611284   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:32.622793   10884 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0108 21:45:32.622793   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:32.622793   10884 round_trippers.go:580]     Audit-Id: 5730861b-9d65-46f4-9332-0c88c1896885
	I0108 21:45:32.622793   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:32.622793   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:32.622793   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:32.622793   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:32.622793   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:32 GMT
	I0108 21:45:32.622793   10884 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1751"},"items":[{"metadata":{"name":"etcd-multinode-554300","namespace":"kube-system","uid":"55fb89f1-0f93-4967-877e-c170530dd9ed","resourceVersion":"1678","creationTimestamp":"2024-01-08T21:45:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.29.104.77:2379","kubernetes.io/config.hash":"eeac3c939de11db202bb72fb9d694f8f","kubernetes.io/config.mirror":"eeac3c939de11db202bb72fb9d694f8f","kubernetes.io/config.seen":"2024-01-08T21:45:22.563167670Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:45:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 29350 chars]
	I0108 21:45:32.625470   10884 kubeadm.go:787] kubelet initialised
	I0108 21:45:32.625536   10884 kubeadm.go:788] duration metric: took 14.4536ms waiting for restarted kubelet to initialise ...
	I0108 21:45:32.625536   10884 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:45:32.625704   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods
	I0108 21:45:32.625704   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:32.625704   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:32.625704   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:32.630959   10884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 21:45:32.630959   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:32.630959   10884 round_trippers.go:580]     Audit-Id: e588502c-2615-4773-9b23-7298e32da556
	I0108 21:45:32.630959   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:32.630959   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:32.630959   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:32.630959   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:32.630959   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:32 GMT
	I0108 21:45:32.633203   10884 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1751"},"items":[{"metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"1683","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84123 chars]
	I0108 21:45:32.637138   10884 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-q7vd7" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:32.637323   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q7vd7
	I0108 21:45:32.637412   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:32.637412   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:32.637412   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:32.641677   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:45:32.641677   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:32.641677   10884 round_trippers.go:580]     Audit-Id: 1bd374aa-e574-48de-bb2a-19f37a4137e9
	I0108 21:45:32.642002   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:32.642002   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:32.642002   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:32.642002   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:32.642002   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:32 GMT
	I0108 21:45:32.642267   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"1683","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0108 21:45:32.642855   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:32.642855   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:32.642855   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:32.642855   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:32.648428   10884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 21:45:32.648428   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:32.648428   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:32 GMT
	I0108 21:45:32.648428   10884 round_trippers.go:580]     Audit-Id: 4758ecd1-f9cf-486f-a74f-f65795800703
	I0108 21:45:32.648639   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:32.648639   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:32.648639   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:32.648685   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:32.648934   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1667","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0108 21:45:32.648993   10884 pod_ready.go:97] node "multinode-554300" hosting pod "coredns-5dd5756b68-q7vd7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-554300" has status "Ready":"False"
	I0108 21:45:32.648993   10884 pod_ready.go:81] duration metric: took 11.7611ms waiting for pod "coredns-5dd5756b68-q7vd7" in "kube-system" namespace to be "Ready" ...
	E0108 21:45:32.648993   10884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-554300" hosting pod "coredns-5dd5756b68-q7vd7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-554300" has status "Ready":"False"
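Each of these pod_ready checks is cut short because the hosting node still reports Ready=False after the kubelet restart, so the pod condition is never evaluated; the loop moves on to the next control-plane pod and retries within the 4m0s budget. The node condition being consulted can be read directly (sketch):

    kubectl get node multinode-554300 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'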
	I0108 21:45:32.648993   10884 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:32.648993   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-554300
	I0108 21:45:32.648993   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:32.648993   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:32.648993   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:32.651571   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:45:32.652596   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:32.652596   10884 round_trippers.go:580]     Audit-Id: 6bb8522f-61c1-49d9-ac3d-898e222dba03
	I0108 21:45:32.652596   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:32.652596   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:32.652596   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:32.652596   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:32.652596   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:32 GMT
	I0108 21:45:32.652596   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-554300","namespace":"kube-system","uid":"55fb89f1-0f93-4967-877e-c170530dd9ed","resourceVersion":"1678","creationTimestamp":"2024-01-08T21:45:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.29.104.77:2379","kubernetes.io/config.hash":"eeac3c939de11db202bb72fb9d694f8f","kubernetes.io/config.mirror":"eeac3c939de11db202bb72fb9d694f8f","kubernetes.io/config.seen":"2024-01-08T21:45:22.563167670Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:45:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I0108 21:45:32.653262   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:32.653336   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:32.653336   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:32.653367   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:32.656468   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:32.656468   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:32.656468   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:32.656468   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:32 GMT
	I0108 21:45:32.656468   10884 round_trippers.go:580]     Audit-Id: 3d40cb51-0e0a-4776-978e-28a7f2a74a4f
	I0108 21:45:32.656468   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:32.656468   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:32.656468   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:32.656468   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1667","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0108 21:45:32.656468   10884 pod_ready.go:97] node "multinode-554300" hosting pod "etcd-multinode-554300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-554300" has status "Ready":"False"
	I0108 21:45:32.657302   10884 pod_ready.go:81] duration metric: took 8.3088ms waiting for pod "etcd-multinode-554300" in "kube-system" namespace to be "Ready" ...
	E0108 21:45:32.657302   10884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-554300" hosting pod "etcd-multinode-554300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-554300" has status "Ready":"False"
	I0108 21:45:32.657302   10884 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:32.657302   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-554300
	I0108 21:45:32.657497   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:32.657497   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:32.657497   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:32.660169   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:45:32.660169   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:32.660169   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:32.660169   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:32.660169   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:32.660169   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:32 GMT
	I0108 21:45:32.660169   10884 round_trippers.go:580]     Audit-Id: fece025d-0f63-42bd-baf4-5d2454f33c05
	I0108 21:45:32.660169   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:32.661282   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-554300","namespace":"kube-system","uid":"ad4821d4-6eff-483c-b12d-9123225ab172","resourceVersion":"1676","creationTimestamp":"2024-01-08T21:45:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.29.104.77:8443","kubernetes.io/config.hash":"2ecc3d0c6efe0268d7822d74e28f9f5b","kubernetes.io/config.mirror":"2ecc3d0c6efe0268d7822d74e28f9f5b","kubernetes.io/config.seen":"2024-01-08T21:45:22.563174170Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:45:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7644 chars]
	I0108 21:45:32.662003   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:32.662100   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:32.662100   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:32.662100   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:32.664955   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:45:32.664955   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:32.664955   10884 round_trippers.go:580]     Audit-Id: 3dbd3215-19c3-4708-aec7-116e3d86baa4
	I0108 21:45:32.664955   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:32.664955   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:32.664955   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:32.665864   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:32.665864   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:32 GMT
	I0108 21:45:32.666031   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1667","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0108 21:45:32.666235   10884 pod_ready.go:97] node "multinode-554300" hosting pod "kube-apiserver-multinode-554300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-554300" has status "Ready":"False"
	I0108 21:45:32.666235   10884 pod_ready.go:81] duration metric: took 8.9327ms waiting for pod "kube-apiserver-multinode-554300" in "kube-system" namespace to be "Ready" ...
	E0108 21:45:32.666235   10884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-554300" hosting pod "kube-apiserver-multinode-554300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-554300" has status "Ready":"False"
	I0108 21:45:32.666235   10884 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:32.666235   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-554300
	I0108 21:45:32.666235   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:32.666235   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:32.666235   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:32.670010   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:32.670010   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:32.670010   10884 round_trippers.go:580]     Audit-Id: 237b25e9-19ae-4dc2-9682-6432f82440a6
	I0108 21:45:32.670010   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:32.670010   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:32.670010   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:32.670010   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:32.670010   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:32 GMT
	I0108 21:45:32.670010   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-554300","namespace":"kube-system","uid":"c5c47910-dee9-4e42-8623-dbc45d13564f","resourceVersion":"1666","creationTimestamp":"2024-01-08T21:23:32Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8c1bbf537f0866640621fd36da3286e0","kubernetes.io/config.mirror":"8c1bbf537f0866640621fd36da3286e0","kubernetes.io/config.seen":"2024-01-08T21:23:32.232191792Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I0108 21:45:32.671424   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:32.671455   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:32.671522   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:32.671522   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:32.675343   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:32.675343   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:32.675343   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:32.675343   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:32.675343   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:32.675343   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:32 GMT
	I0108 21:45:32.675343   10884 round_trippers.go:580]     Audit-Id: b68443ac-dde7-4667-9cfb-3633241921b6
	I0108 21:45:32.675343   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:32.675343   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1667","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0108 21:45:32.675343   10884 pod_ready.go:97] node "multinode-554300" hosting pod "kube-controller-manager-multinode-554300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-554300" has status "Ready":"False"
	I0108 21:45:32.675343   10884 pod_ready.go:81] duration metric: took 9.1084ms waiting for pod "kube-controller-manager-multinode-554300" in "kube-system" namespace to be "Ready" ...
	E0108 21:45:32.675343   10884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-554300" hosting pod "kube-controller-manager-multinode-554300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-554300" has status "Ready":"False"
	I0108 21:45:32.675343   10884 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jsq7c" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:32.812836   10884 request.go:629] Waited for 137.4919ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jsq7c
	I0108 21:45:32.812836   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jsq7c
	I0108 21:45:32.812836   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:32.812836   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:32.812836   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:32.816776   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:32.816776   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:32.816776   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:32.816776   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:32 GMT
	I0108 21:45:32.817295   10884 round_trippers.go:580]     Audit-Id: c73c8ec8-5522-4ec4-bbff-527ec6a43ff8
	I0108 21:45:32.817295   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:32.817295   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:32.817295   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:32.817637   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jsq7c","generateName":"kube-proxy-","namespace":"kube-system","uid":"cbc6a2d2-bb66-4af4-8a7d-315bc293cac0","resourceVersion":"1670","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0597a1a7-46a6-4b69-bb4b-83cbc4a269de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0597a1a7-46a6-4b69-bb4b-83cbc4a269de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5929 chars]
	I0108 21:45:33.020278   10884 request.go:629] Waited for 201.3007ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:33.020278   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:33.020497   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:33.020497   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:33.020553   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:33.025891   10884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 21:45:33.025891   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:33.025891   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:33.025891   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:33 GMT
	I0108 21:45:33.025891   10884 round_trippers.go:580]     Audit-Id: 9c1aa5de-4b00-4779-9673-26a8060c8d6d
	I0108 21:45:33.025891   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:33.025891   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:33.025891   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:33.027229   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1667","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0108 21:45:33.027427   10884 pod_ready.go:97] node "multinode-554300" hosting pod "kube-proxy-jsq7c" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-554300" has status "Ready":"False"
	I0108 21:45:33.027427   10884 pod_ready.go:81] duration metric: took 352.0828ms waiting for pod "kube-proxy-jsq7c" in "kube-system" namespace to be "Ready" ...
	E0108 21:45:33.027427   10884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-554300" hosting pod "kube-proxy-jsq7c" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-554300" has status "Ready":"False"
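The "Waited for ... due to client-side throttling, not priority and fairness" lines are produced by client-go's client-side rate limiter; with QPS and Burst left at zero in rest.Config (as the kapi.go config dump later in this log shows), the defaults of 5 QPS / burst 10 apply, and bursts of node/pod GETs get spaced out. A hedged sketch of how that limiter can be tuned when building a client; the numbers are examples, not values minikube uses.

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	// Option 1: raise QPS/Burst. Zero values fall back to client-go's
	// defaults (5 QPS, burst 10), which is what produces the short
	// client-side waits seen in the log above.
	cfg.QPS = 50
	cfg.Burst = 100

	// Option 2 (equivalent): supply an explicit token-bucket rate limiter.
	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(50, 100)

	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
```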
	I0108 21:45:33.027427   10884 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nbzjb" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:33.221462   10884 request.go:629] Waited for 193.8271ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nbzjb
	I0108 21:45:33.221462   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nbzjb
	I0108 21:45:33.221462   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:33.221840   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:33.221840   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:33.226258   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:45:33.226460   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:33.226460   10884 round_trippers.go:580]     Audit-Id: f7249e99-3b7c-4a0d-a2a9-635d2be0c4fd
	I0108 21:45:33.226460   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:33.226531   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:33.226531   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:33.226531   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:33.226531   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:33 GMT
	I0108 21:45:33.226873   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nbzjb","generateName":"kube-proxy-","namespace":"kube-system","uid":"73b08d5a-2015-4712-92b4-2d12298e9fc3","resourceVersion":"624","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0597a1a7-46a6-4b69-bb4b-83cbc4a269de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0597a1a7-46a6-4b69-bb4b-83cbc4a269de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0108 21:45:33.424852   10884 request.go:629] Waited for 197.0711ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:45:33.425012   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:45:33.425012   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:33.425012   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:33.425012   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:33.432294   10884 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0108 21:45:33.432836   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:33.432836   10884 round_trippers.go:580]     Audit-Id: 1b9c00ef-f4e2-4008-8f06-1f85e3e0aba8
	I0108 21:45:33.432836   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:33.432836   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:33.432836   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:33.432836   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:33.432836   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:33 GMT
	I0108 21:45:33.432836   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"1588","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_41_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager": [truncated 3818 chars]
	I0108 21:45:33.433791   10884 pod_ready.go:92] pod "kube-proxy-nbzjb" in "kube-system" namespace has status "Ready":"True"
	I0108 21:45:33.433914   10884 pod_ready.go:81] duration metric: took 406.4843ms waiting for pod "kube-proxy-nbzjb" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:33.433914   10884 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pdt95" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:33.612366   10884 request.go:629] Waited for 178.1617ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pdt95
	I0108 21:45:33.612487   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pdt95
	I0108 21:45:33.612487   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:33.612487   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:33.612487   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:33.621013   10884 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0108 21:45:33.621013   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:33.621013   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:33.621013   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:33.621013   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:33.621013   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:33.621013   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:33 GMT
	I0108 21:45:33.621013   10884 round_trippers.go:580]     Audit-Id: 11b73952-39cb-47f3-8c5a-fbe3cf21907c
	I0108 21:45:33.621013   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pdt95","generateName":"kube-proxy-","namespace":"kube-system","uid":"e4aa76bc-96be-46f8-bc0e-7f3a6caa9883","resourceVersion":"1590","creationTimestamp":"2024-01-08T21:31:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0597a1a7-46a6-4b69-bb4b-83cbc4a269de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:31:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0597a1a7-46a6-4b69-bb4b-83cbc4a269de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0108 21:45:33.812682   10884 request.go:629] Waited for 190.0171ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:45:33.812841   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:45:33.812841   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:33.812841   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:33.812841   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:33.816587   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:33.816587   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:33.816587   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:33.816587   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:33.816764   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:33.816764   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:33 GMT
	I0108 21:45:33.816764   10884 round_trippers.go:580]     Audit-Id: ce184153-40f0-4c31-8fba-5a0604bf5799
	I0108 21:45:33.816764   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:33.817050   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"fc944979-99f9-46c6-a35f-f2c3e1c020f4","resourceVersion":"1612","creationTimestamp":"2024-01-08T21:41:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_41_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:41:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3636 chars]
	I0108 21:45:33.817582   10884 pod_ready.go:92] pod "kube-proxy-pdt95" in "kube-system" namespace has status "Ready":"True"
	I0108 21:45:33.817582   10884 pod_ready.go:81] duration metric: took 383.6666ms waiting for pod "kube-proxy-pdt95" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:33.817582   10884 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:34.015233   10884 request.go:629] Waited for 197.3408ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-554300
	I0108 21:45:34.015600   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-554300
	I0108 21:45:34.015600   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:34.015600   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:34.015600   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:34.019071   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:34.019071   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:34.019071   10884 round_trippers.go:580]     Audit-Id: 941a1db6-7616-4e76-9b3d-ddbd083bebcc
	I0108 21:45:34.019071   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:34.019071   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:34.019071   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:34.019071   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:34.019071   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:34 GMT
	I0108 21:45:34.019431   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-554300","namespace":"kube-system","uid":"f5b78bba-6cd0-495b-b6d6-c9afd93b3534","resourceVersion":"1677","creationTimestamp":"2024-01-08T21:23:32Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"33f58bfb7438c4df2c3fb56db1f613ab","kubernetes.io/config.mirror":"33f58bfb7438c4df2c3fb56db1f613ab","kubernetes.io/config.seen":"2024-01-08T21:23:32.232192792Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I0108 21:45:34.218264   10884 request.go:629] Waited for 198.0408ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:34.218559   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:34.218559   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:34.218559   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:34.218683   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:34.222412   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:34.222412   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:34.222412   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:34.222412   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:34.222412   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:34.222412   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:34.222412   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:34 GMT
	I0108 21:45:34.222412   10884 round_trippers.go:580]     Audit-Id: 0671a751-8e0e-473a-9577-b76c1964d6f2
	I0108 21:45:34.223377   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1667","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0108 21:45:34.224129   10884 pod_ready.go:97] node "multinode-554300" hosting pod "kube-scheduler-multinode-554300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-554300" has status "Ready":"False"
	I0108 21:45:34.224220   10884 pod_ready.go:81] duration metric: took 406.6363ms waiting for pod "kube-scheduler-multinode-554300" in "kube-system" namespace to be "Ready" ...
	E0108 21:45:34.224220   10884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-554300" hosting pod "kube-scheduler-multinode-554300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-554300" has status "Ready":"False"
	I0108 21:45:34.224220   10884 pod_ready.go:38] duration metric: took 1.5986765s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:45:34.224220   10884 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:45:34.238783   10884 command_runner.go:130] > -16
	I0108 21:45:34.239696   10884 ops.go:34] apiserver oom_adj: -16
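Reading back the apiserver's oom_adj (-16 here) is a plain /proc lookup keyed on the pgrep result, as in the ssh_runner command above. A minimal sketch of the same check in Go; the process name and path come from the log, and error handling is kept deliberately simple.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the kube-apiserver PID, as `pgrep kube-apiserver` does in the log.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.Fields(string(out))[0]

	// Read the OOM adjustment applied to the static pod's process.
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
}
```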
	I0108 21:45:34.239696   10884 kubeadm.go:640] restartCluster took 14.5777832s
	I0108 21:45:34.239696   10884 kubeadm.go:406] StartCluster complete in 14.6403684s
	I0108 21:45:34.239871   10884 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:45:34.240238   10884 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 21:45:34.241346   10884 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:45:34.242792   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:45:34.242933   10884 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 21:45:34.244384   10884 out.go:177] * Enabled addons: 
	I0108 21:45:34.244692   10884 addons.go:508] enable addons completed in 1.836ms: enabled=[]
	I0108 21:45:34.243546   10884 config.go:182] Loaded profile config "multinode-554300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:45:34.252954   10884 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 21:45:34.254548   10884 kapi.go:59] client config for multinode-554300: &rest.Config{Host:"https://172.29.104.77:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x188c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
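The kapi.go client config dumped above is a rest.Config pointing at the apiserver with the profile's client certificate and CA. A sketch of building an equivalent config by hand, reusing the same host and certificate paths shown in the dump; this is illustrative only, since minikube derives the config from the loaded kubeconfig rather than constructing it like this.

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	profile := `C:\Users\jenkins.minikube7\minikube-integration\.minikube`

	cfg := &rest.Config{
		Host: "https://172.29.104.77:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: profile + `\profiles\multinode-554300\client.crt`,
			KeyFile:  profile + `\profiles\multinode-554300\client.key`,
			CAFile:   profile + `\ca.crt`,
		},
	}

	// QPS/Burst left at zero, so client-go's defaults (5/10) apply --
	// the source of the client-side throttling messages earlier in the log.
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
```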
	I0108 21:45:34.256487   10884 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 21:45:34.256718   10884 round_trippers.go:463] GET https://172.29.104.77:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 21:45:34.256718   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:34.256718   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:34.256718   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:34.267705   10884 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0108 21:45:34.267705   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:34.268061   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:34.268061   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:34.268061   10884 round_trippers.go:580]     Content-Length: 292
	I0108 21:45:34.268061   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:34 GMT
	I0108 21:45:34.268061   10884 round_trippers.go:580]     Audit-Id: 081d3091-0bbc-47c8-a334-5ed724a9dbb0
	I0108 21:45:34.268061   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:34.268134   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:34.268134   10884 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a05171e3-49ee-4610-ae26-e93c6a171dfe","resourceVersion":"1750","creationTimestamp":"2024-01-08T21:23:32Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 21:45:34.268404   10884 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-554300" context rescaled to 1 replicas
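The GET of .../deployments/coredns/scale and the "rescaled to 1 replicas" line correspond to reading the Deployment's Scale subresource and writing it back only when the replica count differs. A hedged client-go sketch of that operation; the helper name `rescaleCoreDNS` is made up for illustration.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// rescaleCoreDNS reads the coredns Scale subresource and updates it only if
// the desired replica count differs from what the cluster already has.
func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
	deployments := cs.AppsV1().Deployments("kube-system")

	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if scale.Spec.Replicas == replicas {
		fmt.Printf("coredns already at %d replicas\n", replicas)
		return nil
	}
	scale.Spec.Replicas = replicas
	_, err = deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := rescaleCoreDNS(context.Background(), cs, 1); err != nil {
		panic(err)
	}
}
```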
	I0108 21:45:34.268460   10884 start.go:223] Will wait 6m0s for node &{Name: IP:172.29.104.77 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 21:45:34.269314   10884 out.go:177] * Verifying Kubernetes components...
	I0108 21:45:34.283392   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:45:34.402751   10884 command_runner.go:130] > apiVersion: v1
	I0108 21:45:34.402751   10884 command_runner.go:130] > data:
	I0108 21:45:34.402751   10884 command_runner.go:130] >   Corefile: |
	I0108 21:45:34.402751   10884 command_runner.go:130] >     .:53 {
	I0108 21:45:34.402751   10884 command_runner.go:130] >         log
	I0108 21:45:34.402751   10884 command_runner.go:130] >         errors
	I0108 21:45:34.402751   10884 command_runner.go:130] >         health {
	I0108 21:45:34.402751   10884 command_runner.go:130] >            lameduck 5s
	I0108 21:45:34.402751   10884 command_runner.go:130] >         }
	I0108 21:45:34.402751   10884 command_runner.go:130] >         ready
	I0108 21:45:34.402751   10884 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0108 21:45:34.402751   10884 command_runner.go:130] >            pods insecure
	I0108 21:45:34.402751   10884 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0108 21:45:34.402751   10884 command_runner.go:130] >            ttl 30
	I0108 21:45:34.402751   10884 command_runner.go:130] >         }
	I0108 21:45:34.402751   10884 command_runner.go:130] >         prometheus :9153
	I0108 21:45:34.402751   10884 command_runner.go:130] >         hosts {
	I0108 21:45:34.402751   10884 command_runner.go:130] >            172.29.96.1 host.minikube.internal
	I0108 21:45:34.402751   10884 command_runner.go:130] >            fallthrough
	I0108 21:45:34.402751   10884 command_runner.go:130] >         }
	I0108 21:45:34.402751   10884 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0108 21:45:34.402751   10884 command_runner.go:130] >            max_concurrent 1000
	I0108 21:45:34.402751   10884 command_runner.go:130] >         }
	I0108 21:45:34.402751   10884 command_runner.go:130] >         cache 30
	I0108 21:45:34.402751   10884 command_runner.go:130] >         loop
	I0108 21:45:34.402751   10884 command_runner.go:130] >         reload
	I0108 21:45:34.402751   10884 command_runner.go:130] >         loadbalance
	I0108 21:45:34.402751   10884 command_runner.go:130] >     }
	I0108 21:45:34.402751   10884 command_runner.go:130] > kind: ConfigMap
	I0108 21:45:34.402751   10884 command_runner.go:130] > metadata:
	I0108 21:45:34.402751   10884 command_runner.go:130] >   creationTimestamp: "2024-01-08T21:23:32Z"
	I0108 21:45:34.402751   10884 command_runner.go:130] >   name: coredns
	I0108 21:45:34.402751   10884 command_runner.go:130] >   namespace: kube-system
	I0108 21:45:34.402751   10884 command_runner.go:130] >   resourceVersion: "395"
	I0108 21:45:34.402751   10884 command_runner.go:130] >   uid: 85d0c8c5-2dbc-4b73-acd3-3db46ce68b2b
	I0108 21:45:34.402751   10884 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
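The Corefile dumped above lives in the coredns ConfigMap that gets inspected before deciding whether the host.minikube.internal host record still needs to be added; here it is already present, hence the "skipping..." line. A minimal sketch of that check, assuming the standard ConfigMap name and key; the logic is a simplified stand-in, not minikube's exact code.

```go
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The coredns ConfigMap in kube-system holds the Corefile shown above.
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.Background(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// If the hosts block already resolves host.minikube.internal there is
	// nothing to patch -- matching the "skipping..." line in the log.
	if strings.Contains(cm.Data["Corefile"], "host.minikube.internal") {
		fmt.Println("CoreDNS already contains host.minikube.internal host record, skipping")
		return
	}
	fmt.Println("host record missing; a hosts entry would need to be added to the Corefile")
}
```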
	I0108 21:45:34.403284   10884 node_ready.go:35] waiting up to 6m0s for node "multinode-554300" to be "Ready" ...
	I0108 21:45:34.422980   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:34.422980   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:34.422980   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:34.422980   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:34.426879   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:34.426879   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:34.426879   10884 round_trippers.go:580]     Audit-Id: 5d0f1e49-e361-4d31-8581-ac056a798af1
	I0108 21:45:34.426948   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:34.426948   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:34.426948   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:34.426948   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:34.426948   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:34 GMT
	I0108 21:45:34.426948   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1667","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0108 21:45:34.908041   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:34.908198   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:34.908198   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:34.908198   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:34.912593   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:45:34.912593   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:34.912593   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:34 GMT
	I0108 21:45:34.912593   10884 round_trippers.go:580]     Audit-Id: b52792ab-fd8d-4a28-bdc2-fbae122f34b5
	I0108 21:45:34.912593   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:34.912593   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:34.912593   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:34.912593   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:34.912593   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1667","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0108 21:45:35.407101   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:35.407167   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:35.407167   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:35.407167   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:35.411191   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:45:35.411191   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:35.411417   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:35 GMT
	I0108 21:45:35.411417   10884 round_trippers.go:580]     Audit-Id: 1437c20e-cac2-4982-9cb2-02e78a5f11b3
	I0108 21:45:35.411417   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:35.411417   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:35.411417   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:35.411417   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:35.411593   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1667","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0108 21:45:35.910207   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:35.910265   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:35.910265   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:35.910265   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:35.915321   10884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 21:45:35.915496   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:35.915496   10884 round_trippers.go:580]     Audit-Id: e222c9ec-83ad-435e-998b-2c3efaf8f03f
	I0108 21:45:35.915496   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:35.915496   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:35.915496   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:35.915496   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:35.915496   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:35 GMT
	I0108 21:45:35.915689   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1667","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0108 21:45:36.410480   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:36.410480   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:36.410480   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:36.410480   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:36.413468   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:45:36.413468   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:36.413468   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:36.413468   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:36.413468   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:36.413468   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:36.413468   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:36 GMT
	I0108 21:45:36.413468   10884 round_trippers.go:580]     Audit-Id: 4954c651-ffd7-46c0-9da5-6b104e12c2d9
	I0108 21:45:36.414467   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1667","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0108 21:45:36.414467   10884 node_ready.go:58] node "multinode-554300" has status "Ready":"False"
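From node_ready.go onward the log is a simple poll: re-GET the Node roughly every half second, for up to 6 minutes, until its Ready condition turns True. A plain-loop sketch of that wait; the interval and timeout are taken from the log, and the function name is illustrative.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named node until its Ready condition is True or the
// timeout expires, the way the node_ready.go loop above keeps re-fetching
// multinode-554300 every ~500ms.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("node %q not Ready after %s", name, timeout)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(interval):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(context.Background(), cs, "multinode-554300", 500*time.Millisecond, 6*time.Minute))
}
```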
	I0108 21:45:36.915231   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:36.915231   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:36.915231   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:36.915231   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:36.920219   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:45:36.920328   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:36.920328   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:36.920328   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:36.920328   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:36 GMT
	I0108 21:45:36.920328   10884 round_trippers.go:580]     Audit-Id: 7e8ee7c6-9538-455b-9c72-88b97926f7e8
	I0108 21:45:36.920328   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:36.920328   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:36.920818   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1667","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0108 21:45:37.417124   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:37.417154   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:37.417154   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:37.417154   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:37.420450   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:37.420450   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:37.420450   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:37.420450   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:37.420450   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:37 GMT
	I0108 21:45:37.420450   10884 round_trippers.go:580]     Audit-Id: 6d752480-7d66-49a6-aa7f-ec08288b1e2c
	I0108 21:45:37.420748   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:37.420748   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:37.420921   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1667","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0108 21:45:37.917633   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:37.917762   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:37.917762   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:37.917762   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:37.924188   10884 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 21:45:37.924863   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:37.924863   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:37 GMT
	I0108 21:45:37.924863   10884 round_trippers.go:580]     Audit-Id: c3cdead8-4eba-45f4-887d-a210ceffbb3a
	I0108 21:45:37.924863   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:37.924863   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:37.924863   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:37.924863   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:37.925641   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1667","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0108 21:45:38.418903   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:38.418903   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:38.419445   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:38.419445   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:38.427631   10884 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0108 21:45:38.427631   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:38.427631   10884 round_trippers.go:580]     Audit-Id: 44cce035-8b5c-44b9-b618-8b20ee7d5944
	I0108 21:45:38.427631   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:38.427631   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:38.427631   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:38.427631   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:38.427961   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:38 GMT
	I0108 21:45:38.428173   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1667","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0108 21:45:38.428805   10884 node_ready.go:58] node "multinode-554300" has status "Ready":"False"
	I0108 21:45:38.919206   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:38.919206   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:38.919206   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:38.919206   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:38.925025   10884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 21:45:38.925025   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:38.925025   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:38.925025   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:38.925108   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:38.925108   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:38 GMT
	I0108 21:45:38.925108   10884 round_trippers.go:580]     Audit-Id: 0e7db9b9-be32-4f12-990c-ca29b3f1696d
	I0108 21:45:38.925108   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:38.925169   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1667","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0108 21:45:39.404304   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:39.404304   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:39.404304   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:39.404426   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:39.408824   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:45:39.408824   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:39.408824   10884 round_trippers.go:580]     Audit-Id: 8bd71cd2-89e8-4dc5-9a09-9d0c1cb0608f
	I0108 21:45:39.409083   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:39.409083   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:39.409083   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:39.409083   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:39.409083   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:39 GMT
	I0108 21:45:39.409278   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1667","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0108 21:45:39.903835   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:39.903835   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:39.903835   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:39.903835   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:39.911886   10884 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0108 21:45:39.911886   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:39.911886   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:39.911886   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:39.911886   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:39 GMT
	I0108 21:45:39.911886   10884 round_trippers.go:580]     Audit-Id: 6bfe3cc1-0eaf-4153-a94b-f23f49c9644d
	I0108 21:45:39.911886   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:39.911886   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:39.912466   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1667","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0108 21:45:40.417934   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:40.418056   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:40.418056   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:40.418122   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:40.422888   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:45:40.422888   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:40.422888   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:40 GMT
	I0108 21:45:40.422888   10884 round_trippers.go:580]     Audit-Id: 5b797d68-74d3-401a-887c-759f888b7629
	I0108 21:45:40.422888   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:40.422888   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:40.422888   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:40.422991   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:40.422991   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1667","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0108 21:45:40.917960   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:40.918040   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:40.918040   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:40.918040   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:40.922381   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:45:40.922381   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:40.922844   10884 round_trippers.go:580]     Audit-Id: 1921da73-ab6e-492b-b6c0-297dcf8a7a4e
	I0108 21:45:40.922844   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:40.922844   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:40.922881   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:40.922881   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:40.922881   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:40 GMT
	I0108 21:45:40.925521   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1667","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0108 21:45:40.926036   10884 node_ready.go:58] node "multinode-554300" has status "Ready":"False"
	I0108 21:45:41.408693   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:41.408693   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:41.408693   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:41.408693   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:41.412342   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:41.412920   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:41.412920   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:41 GMT
	I0108 21:45:41.412920   10884 round_trippers.go:580]     Audit-Id: 08df7462-2a74-47cc-93b2-99f297203380
	I0108 21:45:41.412920   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:41.412920   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:41.412920   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:41.412920   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:41.413176   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1801","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0108 21:45:41.909071   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:41.909071   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:41.909071   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:41.909071   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:41.917476   10884 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0108 21:45:41.917476   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:41.917476   10884 round_trippers.go:580]     Audit-Id: ff8ad44e-aa08-4ef4-bd91-3eba43ff1b64
	I0108 21:45:41.917476   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:41.917476   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:41.917476   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:41.917476   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:41.917476   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:41 GMT
	I0108 21:45:41.917476   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1801","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0108 21:45:42.411668   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:42.412048   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:42.412101   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:42.412101   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:42.415716   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:42.415716   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:42.415716   10884 round_trippers.go:580]     Audit-Id: 1a5440fc-8963-4214-9efc-b761bfea8430
	I0108 21:45:42.415716   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:42.415716   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:42.416713   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:42.416713   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:42.416713   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:42 GMT
	I0108 21:45:42.416945   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1801","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0108 21:45:42.915286   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:42.915286   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:42.915286   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:42.915368   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:42.918601   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:42.918601   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:42.919402   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:42.919402   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:42 GMT
	I0108 21:45:42.919402   10884 round_trippers.go:580]     Audit-Id: c2dfbb84-991b-4513-b69d-c495bb8e6fa4
	I0108 21:45:42.919402   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:42.919402   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:42.919402   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:42.919888   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1801","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0108 21:45:43.406046   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:43.406046   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:43.406198   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:43.406198   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:43.409462   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:43.409462   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:43.409462   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:43.409462   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:43 GMT
	I0108 21:45:43.410345   10884 round_trippers.go:580]     Audit-Id: 59d5b2e4-0d4b-49db-88ae-dcbddb1ba3c6
	I0108 21:45:43.410345   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:43.410345   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:43.410345   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:43.410567   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1801","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0108 21:45:43.410944   10884 node_ready.go:58] node "multinode-554300" has status "Ready":"False"
	I0108 21:45:43.909355   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:43.909355   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:43.909528   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:43.909528   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:43.913910   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:45:43.913910   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:43.914074   10884 round_trippers.go:580]     Audit-Id: 53cf06a7-98b7-4740-9b98-de05af6a62fd
	I0108 21:45:43.914074   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:43.914074   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:43.914074   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:43.914074   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:43.914074   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:43 GMT
	I0108 21:45:43.914443   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1801","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0108 21:45:44.410203   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:44.410203   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:44.410203   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:44.410203   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:44.414688   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:45:44.414871   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:44.414871   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:44.414954   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:44.414954   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:44.414954   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:44.414954   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:44 GMT
	I0108 21:45:44.414954   10884 round_trippers.go:580]     Audit-Id: 13145426-6e99-40a6-b17e-fbaac996fdb0
	I0108 21:45:44.414954   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1801","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0108 21:45:44.904186   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:44.904264   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:44.904264   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:44.904366   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:44.924388   10884 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0108 21:45:44.924388   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:44.924388   10884 round_trippers.go:580]     Audit-Id: 94e52bb8-85a7-40a2-83f3-e6cb0d4c192c
	I0108 21:45:44.924388   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:44.924388   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:44.924388   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:44.924388   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:44.924388   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:44 GMT
	I0108 21:45:44.924388   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1801","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0108 21:45:45.412170   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:45.412170   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:45.412170   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:45.412170   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:45.415320   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:45.415320   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:45.415320   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:45.415320   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:45 GMT
	I0108 21:45:45.415320   10884 round_trippers.go:580]     Audit-Id: 024b852c-1e3e-419d-a9c2-48f59b60a691
	I0108 21:45:45.415320   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:45.415838   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:45.415838   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:45.416060   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1801","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0108 21:45:45.416696   10884 node_ready.go:58] node "multinode-554300" has status "Ready":"False"
	I0108 21:45:45.918418   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:45.918478   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:45.918478   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:45.918478   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:45.926933   10884 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0108 21:45:45.926933   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:45.926933   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:45.927100   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:45 GMT
	I0108 21:45:45.927100   10884 round_trippers.go:580]     Audit-Id: fe90003e-530a-41af-b578-9f4e8562b2f6
	I0108 21:45:45.927100   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:45.927100   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:45.927100   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:45.927255   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1801","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0108 21:45:46.407920   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:46.408032   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:46.408032   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:46.408032   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:46.411964   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:46.412576   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:46.412576   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:46.412576   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:46 GMT
	I0108 21:45:46.412576   10884 round_trippers.go:580]     Audit-Id: e4d54451-52ac-44a1-a663-de0ad8cfd689
	I0108 21:45:46.412576   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:46.412656   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:46.412656   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:46.412959   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1801","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0108 21:45:46.909374   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:46.909374   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:46.909374   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:46.909487   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:46.912809   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:46.912809   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:46.912809   10884 round_trippers.go:580]     Audit-Id: 9d8e60f8-e60b-4712-9bc3-dc23a20e2c73
	I0108 21:45:46.912809   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:46.912809   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:46.912809   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:46.912809   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:46.912809   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:46 GMT
	I0108 21:45:46.912809   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1801","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0108 21:45:47.409416   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:47.409495   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:47.409495   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:47.409495   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:47.413989   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:45:47.413989   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:47.413989   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:47.414519   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:47.414519   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:47.414519   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:47.414519   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:47 GMT
	I0108 21:45:47.414576   10884 round_trippers.go:580]     Audit-Id: 77282a9e-9071-4fba-9467-634e1a7d3a3f
	I0108 21:45:47.414688   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1801","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0108 21:45:47.909473   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:47.909549   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:47.909549   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:47.909549   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:47.912947   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:47.912947   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:47.912947   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:47.912947   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:47 GMT
	I0108 21:45:47.912947   10884 round_trippers.go:580]     Audit-Id: b9d7b26b-dcc3-4e75-b594-834eb73304db
	I0108 21:45:47.912947   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:47.912947   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:47.912947   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:47.914457   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1801","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0108 21:45:47.915024   10884 node_ready.go:58] node "multinode-554300" has status "Ready":"False"
	I0108 21:45:48.410097   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:48.410157   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:48.410157   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:48.410157   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:48.413976   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:48.414979   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:48.414979   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:48.414979   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:48 GMT
	I0108 21:45:48.414979   10884 round_trippers.go:580]     Audit-Id: b8715f9d-6ac3-41c3-8993-0d388eedd7df
	I0108 21:45:48.415038   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:48.415038   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:48.415093   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:48.415275   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1801","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0108 21:45:48.906890   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:48.906986   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:48.907042   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:48.907042   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:48.910658   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:48.910658   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:48.911585   10884 round_trippers.go:580]     Audit-Id: f109f03f-f68d-42e6-a8e3-c9593d97f784
	I0108 21:45:48.911585   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:48.911585   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:48.911585   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:48.911585   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:48.911585   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:48 GMT
	I0108 21:45:48.911706   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1801","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0108 21:45:49.408047   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:49.408047   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:49.408047   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:49.408047   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:49.411725   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:49.411725   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:49.411725   10884 round_trippers.go:580]     Audit-Id: 877173f2-bca2-4d81-85f0-2bc0dfb7eeed
	I0108 21:45:49.412710   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:49.412710   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:49.412742   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:49.412742   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:49.412742   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:49 GMT
	I0108 21:45:49.412985   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1834","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0108 21:45:49.413529   10884 node_ready.go:49] node "multinode-554300" has status "Ready":"True"
	I0108 21:45:49.413529   10884 node_ready.go:38] duration metric: took 15.010102s waiting for node "multinode-554300" to be "Ready" ...
	I0108 21:45:49.413529   10884 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
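	(The trace above records repeated GET requests to /api/v1/nodes/multinode-554300 at roughly 500ms intervals until the node's Ready condition turns True, after which an analogous wait runs for the system pods. The Go sketch below illustrates that kind of Ready poll using client-go's wait helpers; it is an assumption-laden illustration only, not minikube's actual node_ready.go code, and the function name waitForNodeReady, the kubeconfig path, and the interval/timeout values are hypothetical.)

```go
// Illustrative sketch only: a generic "wait until the node is Ready" poll
// similar in shape to the node_ready trace above. Assumes client-go and a
// reachable kubeconfig; names and values here are hypothetical.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the Node object every interval until its Ready
// condition reports True or the timeout expires.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, interval, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, interval, timeout, true, func(ctx context.Context) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	// Hypothetical kubeconfig path; adjust for the environment under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForNodeReady(context.Background(), cs, "multinode-554300", 500*time.Millisecond, 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
```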
	I0108 21:45:49.413529   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods
	I0108 21:45:49.413529   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:49.413529   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:49.413529   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:49.421107   10884 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0108 21:45:49.421107   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:49.421107   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:49.421107   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:49 GMT
	I0108 21:45:49.421107   10884 round_trippers.go:580]     Audit-Id: 5edd684c-398a-4e14-976b-f316942bdd42
	I0108 21:45:49.421107   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:49.421107   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:49.421107   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:49.423037   10884 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1834"},"items":[{"metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"1820","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82736 chars]
	I0108 21:45:49.427563   10884 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-q7vd7" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:49.427563   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q7vd7
	I0108 21:45:49.427563   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:49.427563   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:49.427563   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:49.431433   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:49.431433   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:49.431433   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:49.431433   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:49.431433   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:49.431433   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:49.431433   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:49 GMT
	I0108 21:45:49.431433   10884 round_trippers.go:580]     Audit-Id: bca8faca-516b-46e4-b2c1-19969d3b19ef
	I0108 21:45:49.431433   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"1820","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6722 chars]
	I0108 21:45:49.432347   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:49.432425   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:49.432425   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:49.432425   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:49.435250   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:45:49.435777   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:49.435777   10884 round_trippers.go:580]     Audit-Id: d8a1b5fe-b117-4d0f-a861-03afbe80e09d
	I0108 21:45:49.435777   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:49.435777   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:49.435777   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:49.435777   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:49.435777   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:49 GMT
	I0108 21:45:49.436051   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1834","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0108 21:45:49.937870   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q7vd7
	I0108 21:45:49.937870   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:49.937994   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:49.937994   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:49.942679   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:49.942679   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:49.942679   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:49.942679   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:49.942679   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:49 GMT
	I0108 21:45:49.942679   10884 round_trippers.go:580]     Audit-Id: 8fcace46-25c4-4f6b-9883-fc165e16f2bd
	I0108 21:45:49.942679   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:49.942679   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:49.942679   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"1820","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6722 chars]
	I0108 21:45:49.944085   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:49.944085   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:49.944085   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:49.944085   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:49.947782   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:49.947782   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:49.947782   10884 round_trippers.go:580]     Audit-Id: f3893807-e80b-4eea-a698-417eb4ef7b09
	I0108 21:45:49.947782   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:49.947782   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:49.947782   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:49.947782   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:49.947782   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:49 GMT
	I0108 21:45:49.947991   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1834","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0108 21:45:50.438926   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q7vd7
	I0108 21:45:50.438926   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:50.439013   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:50.439013   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:50.443864   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:45:50.444118   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:50.444118   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:50.444118   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:50.444118   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:50.444118   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:50 GMT
	I0108 21:45:50.444118   10884 round_trippers.go:580]     Audit-Id: a5d0d204-1fc1-4f83-9980-ab60980e4a18
	I0108 21:45:50.444118   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:50.444411   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"1820","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6722 chars]
	I0108 21:45:50.445295   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:50.445295   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:50.445295   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:50.445295   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:50.448189   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:45:50.448250   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:50.448250   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:50.448250   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:50.448250   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:50.448250   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:50 GMT
	I0108 21:45:50.448250   10884 round_trippers.go:580]     Audit-Id: 99231c3b-34aa-4021-accb-76d7a8652de8
	I0108 21:45:50.448414   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:50.448739   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1834","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0108 21:45:50.938015   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q7vd7
	I0108 21:45:50.938015   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:50.938015   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:50.938015   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:50.942354   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:45:50.943065   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:50.943246   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:50.943341   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:50 GMT
	I0108 21:45:50.943591   10884 round_trippers.go:580]     Audit-Id: 8a75a1e7-3975-46d3-bbf9-547fd78d5cea
	I0108 21:45:50.943591   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:50.943591   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:50.943591   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:50.943591   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"1820","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6722 chars]
	I0108 21:45:50.944531   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:50.944531   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:50.945061   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:50.945061   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:50.951569   10884 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 21:45:50.951648   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:50.951648   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:50.951648   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:50.951690   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:50.951690   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:50.951690   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:50 GMT
	I0108 21:45:50.951690   10884 round_trippers.go:580]     Audit-Id: 29138015-159b-435d-b904-cf48aece8909
	I0108 21:45:50.952123   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1834","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0108 21:45:51.441913   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q7vd7
	I0108 21:45:51.442005   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:51.442005   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:51.442005   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:51.446422   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:45:51.446797   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:51.446797   10884 round_trippers.go:580]     Audit-Id: e52c30cd-5dc7-4253-8891-6dfff97a27c9
	I0108 21:45:51.446797   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:51.446797   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:51.446797   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:51.446797   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:51.446797   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:51 GMT
	I0108 21:45:51.446911   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"1820","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6722 chars]
	I0108 21:45:51.447817   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:51.447883   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:51.447883   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:51.447883   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:51.451254   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:51.451605   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:51.451605   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:51.451605   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:51.451605   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:51 GMT
	I0108 21:45:51.451605   10884 round_trippers.go:580]     Audit-Id: 7c9ed591-28e1-48a6-b1d4-43a9dc447626
	I0108 21:45:51.451605   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:51.451605   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:51.451605   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:45:51.452618   10884 pod_ready.go:102] pod "coredns-5dd5756b68-q7vd7" in "kube-system" namespace has status "Ready":"False"
	I0108 21:45:51.940063   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q7vd7
	I0108 21:45:51.940404   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:51.940404   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:51.940404   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:51.946922   10884 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 21:45:51.946922   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:51.946922   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:51.946922   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:51.946922   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:51.946922   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:51.946922   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:51 GMT
	I0108 21:45:51.946922   10884 round_trippers.go:580]     Audit-Id: fc19cec8-06ed-477a-8d2a-5f0ae17d729d
	I0108 21:45:51.947490   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"1820","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6722 chars]
	I0108 21:45:51.948040   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:51.948040   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:51.948040   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:51.948040   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:51.951909   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:51.951909   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:51.951909   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:51.951909   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:51 GMT
	I0108 21:45:51.951909   10884 round_trippers.go:580]     Audit-Id: c9609738-c1f1-4098-9d54-a99b87e29089
	I0108 21:45:51.952798   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:51.952798   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:51.952798   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:51.952798   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:45:52.437645   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q7vd7
	I0108 21:45:52.437645   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:52.437645   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:52.437645   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:52.442334   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:45:52.442334   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:52.442334   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:52 GMT
	I0108 21:45:52.442334   10884 round_trippers.go:580]     Audit-Id: 0a507dd7-eb86-46bd-af30-4f2d5eba77b6
	I0108 21:45:52.442486   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:52.442486   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:52.442486   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:52.442486   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:52.442569   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"1820","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6722 chars]
	I0108 21:45:52.443640   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:52.443695   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:52.443695   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:52.443695   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:52.446690   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:45:52.446690   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:52.446690   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:52.446690   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:52.446690   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:52 GMT
	I0108 21:45:52.446690   10884 round_trippers.go:580]     Audit-Id: 6c3ae97e-fad1-4779-9ce0-981f5a5d836b
	I0108 21:45:52.446690   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:52.447232   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:52.447232   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:45:52.939353   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q7vd7
	I0108 21:45:52.939353   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:52.939472   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:52.939472   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:52.942875   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:52.942875   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:52.942875   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:52.942875   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:52.942875   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:52.942875   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:52.942875   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:52 GMT
	I0108 21:45:52.943954   10884 round_trippers.go:580]     Audit-Id: 3c026481-9137-460b-8d8d-d70596734135
	I0108 21:45:52.944086   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"1820","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6722 chars]
	I0108 21:45:52.944801   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:52.944901   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:52.944901   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:52.944901   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:52.948178   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:52.948178   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:52.948178   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:52.948178   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:52.948178   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:52 GMT
	I0108 21:45:52.948678   10884 round_trippers.go:580]     Audit-Id: 3443f2cd-92e3-47fd-a232-11bbe4090c87
	I0108 21:45:52.948678   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:52.948678   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:52.949117   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:45:53.443866   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q7vd7
	I0108 21:45:53.443866   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:53.443866   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:53.443866   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:53.448514   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:45:53.448514   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:53.448514   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:53.448514   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:53.448514   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:53.448514   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:53 GMT
	I0108 21:45:53.448514   10884 round_trippers.go:580]     Audit-Id: 8b1221b2-f07c-4e05-b8fa-faff94340d0a
	I0108 21:45:53.448514   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:53.448514   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"1820","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6722 chars]
	I0108 21:45:53.449481   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:53.449481   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:53.449481   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:53.449481   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:53.451493   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:45:53.452497   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:53.452497   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:53.452497   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:53 GMT
	I0108 21:45:53.452497   10884 round_trippers.go:580]     Audit-Id: 11b9b3a8-b01a-48b9-8896-142ed243f4d2
	I0108 21:45:53.452497   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:53.452497   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:53.452497   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:53.452497   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:45:53.453273   10884 pod_ready.go:102] pod "coredns-5dd5756b68-q7vd7" in "kube-system" namespace has status "Ready":"False"
	I0108 21:45:53.929412   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q7vd7
	I0108 21:45:53.929726   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:53.929726   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:53.929726   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:53.933481   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:53.934190   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:53.934190   10884 round_trippers.go:580]     Audit-Id: ee09052f-4f29-4fdb-8442-3fe8850422be
	I0108 21:45:53.934190   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:53.934190   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:53.934190   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:53.934190   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:53.934190   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:53 GMT
	I0108 21:45:53.934479   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"1820","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6722 chars]
	I0108 21:45:53.935454   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:53.935521   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:53.935521   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:53.935521   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:53.940908   10884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 21:45:53.940997   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:53.940997   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:53 GMT
	I0108 21:45:53.940997   10884 round_trippers.go:580]     Audit-Id: ab5095b0-42f3-4893-a788-fc339a94da3d
	I0108 21:45:53.941048   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:53.941048   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:53.941072   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:53.941103   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:53.941309   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:45:54.442786   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q7vd7
	I0108 21:45:54.442786   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:54.442786   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:54.442786   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:54.446284   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:54.447365   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:54.447365   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:54 GMT
	I0108 21:45:54.447365   10884 round_trippers.go:580]     Audit-Id: b7999dc7-3256-499d-be24-2330f026f38c
	I0108 21:45:54.447414   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:54.447414   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:54.447414   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:54.447414   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:54.447644   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"1820","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6722 chars]
	I0108 21:45:54.448367   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:54.448438   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:54.448438   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:54.448438   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:54.453865   10884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 21:45:54.453865   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:54.453865   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:54.453865   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:54.453865   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:54.453865   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:54.453865   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:54 GMT
	I0108 21:45:54.453865   10884 round_trippers.go:580]     Audit-Id: ea548bfe-6710-4264-8601-13bb380ab97f
	I0108 21:45:54.453865   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:45:54.943640   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q7vd7
	I0108 21:45:54.943640   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:54.943640   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:54.943640   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:54.948322   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:45:54.948378   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:54.948378   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:54 GMT
	I0108 21:45:54.948378   10884 round_trippers.go:580]     Audit-Id: aaa90260-c24a-4e74-97ee-b0e38daf91a5
	I0108 21:45:54.948442   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:54.948442   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:54.948442   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:54.948442   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:54.948442   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"1842","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I0108 21:45:54.949451   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:54.949451   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:54.949533   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:54.949533   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:54.952814   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:54.953139   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:54.953139   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:54 GMT
	I0108 21:45:54.953139   10884 round_trippers.go:580]     Audit-Id: 8aebd4fe-2771-48ea-b03f-71407000ec62
	I0108 21:45:54.953211   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:54.953211   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:54.953211   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:54.953211   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:54.953211   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:45:54.953804   10884 pod_ready.go:92] pod "coredns-5dd5756b68-q7vd7" in "kube-system" namespace has status "Ready":"True"
	I0108 21:45:54.953804   10884 pod_ready.go:81] duration metric: took 5.5262143s waiting for pod "coredns-5dd5756b68-q7vd7" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:54.953804   10884 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:54.953804   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-554300
	I0108 21:45:54.953804   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:54.953804   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:54.953804   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:54.957014   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:45:54.957014   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:54.957014   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:54.957014   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:54.957014   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:54.957014   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:54.957014   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:54 GMT
	I0108 21:45:54.957014   10884 round_trippers.go:580]     Audit-Id: 82cb46e2-80b2-4198-9bf2-a36eb8c37c7a
	I0108 21:45:54.957014   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-554300","namespace":"kube-system","uid":"55fb89f1-0f93-4967-877e-c170530dd9ed","resourceVersion":"1804","creationTimestamp":"2024-01-08T21:45:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.29.104.77:2379","kubernetes.io/config.hash":"eeac3c939de11db202bb72fb9d694f8f","kubernetes.io/config.mirror":"eeac3c939de11db202bb72fb9d694f8f","kubernetes.io/config.seen":"2024-01-08T21:45:22.563167670Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:45:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5863 chars]
	I0108 21:45:54.957829   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:54.957829   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:54.957829   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:54.957829   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:54.961415   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:54.961956   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:54.961956   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:54.961956   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:54.961956   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:54 GMT
	I0108 21:45:54.961956   10884 round_trippers.go:580]     Audit-Id: 7e9b26dc-f613-4c48-b32f-4b98aae6b84c
	I0108 21:45:54.961956   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:54.961956   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:54.962267   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:45:54.962621   10884 pod_ready.go:92] pod "etcd-multinode-554300" in "kube-system" namespace has status "Ready":"True"
	I0108 21:45:54.962621   10884 pod_ready.go:81] duration metric: took 8.8173ms waiting for pod "etcd-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:54.962621   10884 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:54.962782   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-554300
	I0108 21:45:54.962831   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:54.962831   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:54.962876   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:54.965760   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:45:54.966028   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:54.966028   10884 round_trippers.go:580]     Audit-Id: 15f3ed18-3054-4782-8906-ca4b38fa5278
	I0108 21:45:54.966121   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:54.966121   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:54.966121   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:54.966121   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:54.966121   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:54 GMT
	I0108 21:45:54.966121   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-554300","namespace":"kube-system","uid":"ad4821d4-6eff-483c-b12d-9123225ab172","resourceVersion":"1805","creationTimestamp":"2024-01-08T21:45:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.29.104.77:8443","kubernetes.io/config.hash":"2ecc3d0c6efe0268d7822d74e28f9f5b","kubernetes.io/config.mirror":"2ecc3d0c6efe0268d7822d74e28f9f5b","kubernetes.io/config.seen":"2024-01-08T21:45:22.563174170Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:45:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7400 chars]
	I0108 21:45:54.966900   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:54.966900   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:54.966900   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:54.966900   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:54.969497   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:45:54.969497   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:54.969497   10884 round_trippers.go:580]     Audit-Id: 15d34de3-fb9b-4c35-89b3-f4beabeeaab2
	I0108 21:45:54.969497   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:54.969497   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:54.969497   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:54.969497   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:54.969497   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:54 GMT
	I0108 21:45:54.970660   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:45:54.971253   10884 pod_ready.go:92] pod "kube-apiserver-multinode-554300" in "kube-system" namespace has status "Ready":"True"
	I0108 21:45:54.971253   10884 pod_ready.go:81] duration metric: took 8.5611ms waiting for pod "kube-apiserver-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:54.971253   10884 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:54.971428   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-554300
	I0108 21:45:54.971469   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:54.971469   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:54.971469   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:54.974876   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:54.974876   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:54.975045   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:54 GMT
	I0108 21:45:54.975045   10884 round_trippers.go:580]     Audit-Id: a32a578d-518d-4813-ba23-6962f217455e
	I0108 21:45:54.975045   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:54.975045   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:54.975045   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:54.975045   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:54.975045   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-554300","namespace":"kube-system","uid":"c5c47910-dee9-4e42-8623-dbc45d13564f","resourceVersion":"1813","creationTimestamp":"2024-01-08T21:23:32Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8c1bbf537f0866640621fd36da3286e0","kubernetes.io/config.mirror":"8c1bbf537f0866640621fd36da3286e0","kubernetes.io/config.seen":"2024-01-08T21:23:32.232191792Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I0108 21:45:54.975919   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:54.975919   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:54.975919   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:54.975919   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:54.979603   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:54.979603   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:54.980137   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:54.980137   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:54 GMT
	I0108 21:45:54.980137   10884 round_trippers.go:580]     Audit-Id: 5cf55218-57f2-434b-9160-782d9316c16f
	I0108 21:45:54.980174   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:54.980174   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:54.980174   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:54.980174   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:45:54.980174   10884 pod_ready.go:92] pod "kube-controller-manager-multinode-554300" in "kube-system" namespace has status "Ready":"True"
	I0108 21:45:54.980174   10884 pod_ready.go:81] duration metric: took 8.9213ms waiting for pod "kube-controller-manager-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:54.980174   10884 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jsq7c" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:54.980780   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jsq7c
	I0108 21:45:54.980780   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:54.980780   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:54.980780   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:54.983041   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:45:54.983041   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:54.983891   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:54.983891   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:54.983891   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:54 GMT
	I0108 21:45:54.983891   10884 round_trippers.go:580]     Audit-Id: c84252a9-5d4e-4101-81e8-9ed7ce4e3fac
	I0108 21:45:54.983891   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:54.983891   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:54.983891   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jsq7c","generateName":"kube-proxy-","namespace":"kube-system","uid":"cbc6a2d2-bb66-4af4-8a7d-315bc293cac0","resourceVersion":"1807","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0597a1a7-46a6-4b69-bb4b-83cbc4a269de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0597a1a7-46a6-4b69-bb4b-83cbc4a269de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0108 21:45:54.984539   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:54.984539   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:54.984539   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:54.984539   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:54.990042   10884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 21:45:54.990042   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:54.990042   10884 round_trippers.go:580]     Audit-Id: 675937f4-4d89-43f4-95ab-2a8d7fc81f74
	I0108 21:45:54.990042   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:54.990042   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:54.990042   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:54.990042   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:54.990042   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:54 GMT
	I0108 21:45:54.990042   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:45:54.990886   10884 pod_ready.go:92] pod "kube-proxy-jsq7c" in "kube-system" namespace has status "Ready":"True"
	I0108 21:45:54.990886   10884 pod_ready.go:81] duration metric: took 10.7113ms waiting for pod "kube-proxy-jsq7c" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:54.990886   10884 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nbzjb" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:55.146708   10884 request.go:629] Waited for 155.6266ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nbzjb
	I0108 21:45:55.146896   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nbzjb
	I0108 21:45:55.147000   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:55.147000   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:55.147000   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:55.150554   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:55.151099   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:55.151099   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:55.151099   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:55.151099   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:55.151099   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:55 GMT
	I0108 21:45:55.151099   10884 round_trippers.go:580]     Audit-Id: a394daef-d29d-421f-9126-4f684663e2a5
	I0108 21:45:55.151099   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:55.151494   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nbzjb","generateName":"kube-proxy-","namespace":"kube-system","uid":"73b08d5a-2015-4712-92b4-2d12298e9fc3","resourceVersion":"624","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0597a1a7-46a6-4b69-bb4b-83cbc4a269de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0597a1a7-46a6-4b69-bb4b-83cbc4a269de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0108 21:45:55.353150   10884 request.go:629] Waited for 200.6035ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:45:55.353350   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:45:55.353350   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:55.353350   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:55.353479   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:55.358091   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:45:55.358091   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:55.358091   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:55.358091   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:55.358319   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:55 GMT
	I0108 21:45:55.358319   10884 round_trippers.go:580]     Audit-Id: 654eaebe-79ae-4a2b-8d02-a63574bfaa25
	I0108 21:45:55.358319   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:55.358319   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:55.358569   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7","resourceVersion":"1588","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_41_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager": [truncated 3818 chars]
	I0108 21:45:55.358639   10884 pod_ready.go:92] pod "kube-proxy-nbzjb" in "kube-system" namespace has status "Ready":"True"
	I0108 21:45:55.358639   10884 pod_ready.go:81] duration metric: took 367.7512ms waiting for pod "kube-proxy-nbzjb" in "kube-system" namespace to be "Ready" ...
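The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's built-in client-side rate limiter (defaults of 5 QPS with a burst of 10), not from server-side APF. A minimal sketch of where those knobs live, assuming a plain kubeconfig-based client rather than minikube's own client construction:

    package sketch

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newClient builds a clientset with a larger client-side rate budget so that
    // tight polling loops like the one above do not hit the throttling waits.
    // The QPS/Burst values are illustrative, not minikube's settings.
    func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // client-go default is 5 requests/second
        cfg.Burst = 100 // client-go default is 10
        return kubernetes.NewForConfig(cfg)
    }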
	I0108 21:45:55.358639   10884 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pdt95" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:55.556208   10884 request.go:629] Waited for 197.0146ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pdt95
	I0108 21:45:55.556208   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pdt95
	I0108 21:45:55.556208   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:55.556208   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:55.556208   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:55.560934   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:45:55.560934   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:55.561370   10884 round_trippers.go:580]     Audit-Id: b734d959-0dfc-443d-b44f-66b8262eaeab
	I0108 21:45:55.561370   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:55.561370   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:55.561370   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:55.561370   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:55.561370   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:55 GMT
	I0108 21:45:55.561619   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pdt95","generateName":"kube-proxy-","namespace":"kube-system","uid":"e4aa76bc-96be-46f8-bc0e-7f3a6caa9883","resourceVersion":"1590","creationTimestamp":"2024-01-08T21:31:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0597a1a7-46a6-4b69-bb4b-83cbc4a269de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:31:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0597a1a7-46a6-4b69-bb4b-83cbc4a269de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0108 21:45:55.744497   10884 request.go:629] Waited for 182.199ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:45:55.744729   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:45:55.744729   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:55.744729   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:55.744729   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:55.749298   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:45:55.749298   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:55.750276   10884 round_trippers.go:580]     Audit-Id: 1c8d1bf3-d5c3-496b-a153-4f4787279776
	I0108 21:45:55.750276   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:55.750307   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:55.750307   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:55.750307   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:55.750307   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:55 GMT
	I0108 21:45:55.750596   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"fc944979-99f9-46c6-a35f-f2c3e1c020f4","resourceVersion":"1612","creationTimestamp":"2024-01-08T21:41:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_41_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:41:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3636 chars]
	I0108 21:45:55.751084   10884 pod_ready.go:92] pod "kube-proxy-pdt95" in "kube-system" namespace has status "Ready":"True"
	I0108 21:45:55.751084   10884 pod_ready.go:81] duration metric: took 392.443ms waiting for pod "kube-proxy-pdt95" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:55.751084   10884 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:55.946396   10884 request.go:629] Waited for 195.3111ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-554300
	I0108 21:45:55.946724   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-554300
	I0108 21:45:55.946724   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:55.946891   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:55.946891   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:55.954652   10884 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0108 21:45:55.954652   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:55.954652   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:55.954652   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:55.954652   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:55.954652   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:55 GMT
	I0108 21:45:55.954652   10884 round_trippers.go:580]     Audit-Id: 4d522692-862d-4045-940f-9432ee2e853e
	I0108 21:45:55.954652   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:55.955214   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-554300","namespace":"kube-system","uid":"f5b78bba-6cd0-495b-b6d6-c9afd93b3534","resourceVersion":"1806","creationTimestamp":"2024-01-08T21:23:32Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"33f58bfb7438c4df2c3fb56db1f613ab","kubernetes.io/config.mirror":"33f58bfb7438c4df2c3fb56db1f613ab","kubernetes.io/config.seen":"2024-01-08T21:23:32.232192792Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I0108 21:45:56.148415   10884 request.go:629] Waited for 192.8194ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:56.148415   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:45:56.148415   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:56.148415   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:56.148415   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:56.152038   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:56.152038   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:56.152943   10884 round_trippers.go:580]     Audit-Id: 233e56c2-ab80-4a0b-8bf0-f2ff2484ac93
	I0108 21:45:56.152943   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:56.152943   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:56.152943   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:56.152943   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:56.152943   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:56 GMT
	I0108 21:45:56.153245   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:45:56.153797   10884 pod_ready.go:92] pod "kube-scheduler-multinode-554300" in "kube-system" namespace has status "Ready":"True"
	I0108 21:45:56.153797   10884 pod_ready.go:81] duration metric: took 402.7114ms waiting for pod "kube-scheduler-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:45:56.153797   10884 pod_ready.go:38] duration metric: took 6.7402354s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
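For reference, the per-pod check that pod_ready.go logs above boils down to fetching the pod and reading its Ready condition. A minimal client-go sketch of that check; the helper name and wiring are illustrative, not minikube's actual code:

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the named pod has condition Ready=True,
    // which is what the pod_ready.go:92 lines above are asserting.
    func isPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }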
	I0108 21:45:56.153797   10884 api_server.go:52] waiting for apiserver process to appear ...
	I0108 21:45:56.166833   10884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:45:56.186987   10884 command_runner.go:130] > 1871
	I0108 21:45:56.186987   10884 api_server.go:72] duration metric: took 21.9183104s to wait for apiserver process to appear ...
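The apiserver process check above runs "sudo pgrep -xnf kube-apiserver.*minikube.*" on the node; the lone "1871" line is the PID pgrep printed. A local sketch of the same probe (minikube actually executes it over SSH via its ssh_runner, which is omitted here):

    package sketch

    import (
        "os/exec"
        "strings"
    )

    // apiserverPID returns the PID printed by pgrep, mirroring the check in the
    // log above. pgrep exits non-zero when no matching process exists.
    func apiserverPID() (string, error) {
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }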
	I0108 21:45:56.186987   10884 api_server.go:88] waiting for apiserver healthz status ...
	I0108 21:45:56.187216   10884 api_server.go:253] Checking apiserver healthz at https://172.29.104.77:8443/healthz ...
	I0108 21:45:56.196775   10884 api_server.go:279] https://172.29.104.77:8443/healthz returned 200:
	ok
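The healthz probe above expects HTTP 200 with the literal body "ok" from /healthz. A sketch of an equivalent check through client-go's discovery REST client; minikube issues this request with its own HTTP client, so treat this as illustrative only:

    package sketch

    import (
        "context"

        "k8s.io/client-go/kubernetes"
    )

    // apiserverHealthy hits /healthz and treats the literal body "ok" as healthy,
    // matching the 200 + "ok" response logged above.
    func apiserverHealthy(ctx context.Context, c kubernetes.Interface) (bool, error) {
        body, err := c.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return false, err
        }
        return string(body) == "ok", nil
    }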
	I0108 21:45:56.197120   10884 round_trippers.go:463] GET https://172.29.104.77:8443/version
	I0108 21:45:56.197156   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:56.197190   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:56.197190   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:56.199105   10884 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0108 21:45:56.199105   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:56.199105   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:56.199105   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:56.199105   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:56.199105   10884 round_trippers.go:580]     Content-Length: 264
	I0108 21:45:56.199105   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:56 GMT
	I0108 21:45:56.199105   10884 round_trippers.go:580]     Audit-Id: 80ee8cd8-18c5-4b3f-ad70-ea76542bb372
	I0108 21:45:56.199105   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:56.199105   10884 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0108 21:45:56.199105   10884 api_server.go:141] control plane version: v1.28.4
	I0108 21:45:56.199105   10884 api_server.go:131] duration metric: took 12.1179ms to wait for apiserver health ...
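The /version response body above is the standard version.Info document; the "control plane version: v1.28.4" line is just its gitVersion field. A sketch of reading it through the discovery client:

    package sketch

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    // controlPlaneVersion returns the apiserver's reported version, e.g.
    // "v1.28.4 (go1.20.11, linux/amd64)" for the response body shown above.
    func controlPlaneVersion(c kubernetes.Interface) (string, error) {
        info, err := c.Discovery().ServerVersion()
        if err != nil {
            return "", err
        }
        return fmt.Sprintf("%s (%s, %s)", info.GitVersion, info.GoVersion, info.Platform), nil
    }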
	I0108 21:45:56.199105   10884 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:45:56.350911   10884 request.go:629] Waited for 151.8046ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods
	I0108 21:45:56.351109   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods
	I0108 21:45:56.351109   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:56.351109   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:56.351224   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:56.359326   10884 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0108 21:45:56.359326   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:56.359326   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:56 GMT
	I0108 21:45:56.359387   10884 round_trippers.go:580]     Audit-Id: 76511a5f-6835-4cd6-a8e2-27591a810bd1
	I0108 21:45:56.359387   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:56.359387   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:56.359387   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:56.359465   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:56.362239   10884 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1846"},"items":[{"metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"1842","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82507 chars]
	I0108 21:45:56.367246   10884 system_pods.go:59] 12 kube-system pods found
	I0108 21:45:56.367300   10884 system_pods.go:61] "coredns-5dd5756b68-q7vd7" [fe215542-1a69-4152-9098-06937431fa74] Running
	I0108 21:45:56.367345   10884 system_pods.go:61] "etcd-multinode-554300" [55fb89f1-0f93-4967-877e-c170530dd9ed] Running
	I0108 21:45:56.367345   10884 system_pods.go:61] "kindnet-4q524" [f633fa0f-0091-439f-b152-02f668039214] Running
	I0108 21:45:56.367397   10884 system_pods.go:61] "kindnet-5r79t" [275c1f53-70c6-4922-9ba4-d931e1515729] Running
	I0108 21:45:56.367397   10884 system_pods.go:61] "kindnet-dnjjm" [4c6605a5-1db1-49f6-ae23-e2fbba50ecbc] Running
	I0108 21:45:56.367444   10884 system_pods.go:61] "kube-apiserver-multinode-554300" [ad4821d4-6eff-483c-b12d-9123225ab172] Running
	I0108 21:45:56.367444   10884 system_pods.go:61] "kube-controller-manager-multinode-554300" [c5c47910-dee9-4e42-8623-dbc45d13564f] Running
	I0108 21:45:56.367444   10884 system_pods.go:61] "kube-proxy-jsq7c" [cbc6a2d2-bb66-4af4-8a7d-315bc293cac0] Running
	I0108 21:45:56.367444   10884 system_pods.go:61] "kube-proxy-nbzjb" [73b08d5a-2015-4712-92b4-2d12298e9fc3] Running
	I0108 21:45:56.367444   10884 system_pods.go:61] "kube-proxy-pdt95" [e4aa76bc-96be-46f8-bc0e-7f3a6caa9883] Running
	I0108 21:45:56.367444   10884 system_pods.go:61] "kube-scheduler-multinode-554300" [f5b78bba-6cd0-495b-b6d6-c9afd93b3534] Running
	I0108 21:45:56.367493   10884 system_pods.go:61] "storage-provisioner" [2fb8721f-01cc-4078-b45c-964d73e3da98] Running
	I0108 21:45:56.367493   10884 system_pods.go:74] duration metric: took 168.387ms to wait for pod list to return data ...
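The 12-pod inventory above comes from a single list of the kube-system namespace. A sketch of producing the same summary, assuming a plain clientset:

    package sketch

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // listSystemPods prints each kube-system pod with its UID and phase, like the
    // system_pods.go:61 lines above.
    func listSystemPods(ctx context.Context, c kubernetes.Interface) error {
        pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
        return nil
    }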
	I0108 21:45:56.367493   10884 default_sa.go:34] waiting for default service account to be created ...
	I0108 21:45:56.557181   10884 request.go:629] Waited for 189.3577ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/namespaces/default/serviceaccounts
	I0108 21:45:56.557297   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/default/serviceaccounts
	I0108 21:45:56.557424   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:56.557424   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:56.557424   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:56.561853   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:45:56.561853   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:56.561853   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:56 GMT
	I0108 21:45:56.561853   10884 round_trippers.go:580]     Audit-Id: e42d3edf-601a-4966-a732-85f8043e000e
	I0108 21:45:56.561853   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:56.561853   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:56.561853   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:56.561853   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:56.561853   10884 round_trippers.go:580]     Content-Length: 262
	I0108 21:45:56.562447   10884 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1846"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"d2fb8b50-fecf-4612-b557-5a63ee90f2f3","resourceVersion":"365","creationTimestamp":"2024-01-08T21:23:44Z"}}]}
	I0108 21:45:56.562787   10884 default_sa.go:45] found service account: "default"
	I0108 21:45:56.562787   10884 default_sa.go:55] duration metric: took 195.2479ms for default service account to be created ...
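The default service account check above is a plain list of service accounts in the "default" namespace. A sketch of the same check, hedged as before:

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // hasDefaultServiceAccount mirrors the default_sa.go lines above: list the
    // service accounts in "default" and look for one named "default".
    func hasDefaultServiceAccount(ctx context.Context, c kubernetes.Interface) (bool, error) {
        sas, err := c.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        for _, sa := range sas.Items {
            if sa.Name == "default" {
                return true, nil
            }
        }
        return false, nil
    }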
	I0108 21:45:56.562787   10884 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 21:45:56.743991   10884 request.go:629] Waited for 180.8632ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods
	I0108 21:45:56.744086   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods
	I0108 21:45:56.744086   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:56.744086   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:56.744086   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:56.751017   10884 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 21:45:56.751017   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:56.751017   10884 round_trippers.go:580]     Audit-Id: 58689a8f-3fa3-4430-a7aa-ff1d3bda12ac
	I0108 21:45:56.751017   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:56.751017   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:56.751017   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:56.751017   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:56.751017   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:56 GMT
	I0108 21:45:56.752967   10884 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1846"},"items":[{"metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"1842","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82507 chars]
	I0108 21:45:56.756752   10884 system_pods.go:86] 12 kube-system pods found
	I0108 21:45:56.756752   10884 system_pods.go:89] "coredns-5dd5756b68-q7vd7" [fe215542-1a69-4152-9098-06937431fa74] Running
	I0108 21:45:56.756752   10884 system_pods.go:89] "etcd-multinode-554300" [55fb89f1-0f93-4967-877e-c170530dd9ed] Running
	I0108 21:45:56.756752   10884 system_pods.go:89] "kindnet-4q524" [f633fa0f-0091-439f-b152-02f668039214] Running
	I0108 21:45:56.756752   10884 system_pods.go:89] "kindnet-5r79t" [275c1f53-70c6-4922-9ba4-d931e1515729] Running
	I0108 21:45:56.756752   10884 system_pods.go:89] "kindnet-dnjjm" [4c6605a5-1db1-49f6-ae23-e2fbba50ecbc] Running
	I0108 21:45:56.756752   10884 system_pods.go:89] "kube-apiserver-multinode-554300" [ad4821d4-6eff-483c-b12d-9123225ab172] Running
	I0108 21:45:56.756752   10884 system_pods.go:89] "kube-controller-manager-multinode-554300" [c5c47910-dee9-4e42-8623-dbc45d13564f] Running
	I0108 21:45:56.756752   10884 system_pods.go:89] "kube-proxy-jsq7c" [cbc6a2d2-bb66-4af4-8a7d-315bc293cac0] Running
	I0108 21:45:56.756752   10884 system_pods.go:89] "kube-proxy-nbzjb" [73b08d5a-2015-4712-92b4-2d12298e9fc3] Running
	I0108 21:45:56.756752   10884 system_pods.go:89] "kube-proxy-pdt95" [e4aa76bc-96be-46f8-bc0e-7f3a6caa9883] Running
	I0108 21:45:56.756752   10884 system_pods.go:89] "kube-scheduler-multinode-554300" [f5b78bba-6cd0-495b-b6d6-c9afd93b3534] Running
	I0108 21:45:56.756752   10884 system_pods.go:89] "storage-provisioner" [2fb8721f-01cc-4078-b45c-964d73e3da98] Running
	I0108 21:45:56.756752   10884 system_pods.go:126] duration metric: took 193.964ms to wait for k8s-apps to be running ...
	I0108 21:45:56.756752   10884 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:45:56.768499   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:45:56.791383   10884 system_svc.go:56] duration metric: took 34.6305ms WaitForService to wait for kubelet.
	I0108 21:45:56.791488   10884 kubeadm.go:581] duration metric: took 22.5228084s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
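The kubelet check above runs "sudo systemctl is-active --quiet service kubelet" on the node over SSH; with --quiet, systemctl prints nothing and signals "active" purely through exit status 0. A local sketch of the same probe (SSH transport omitted):

    package sketch

    import "os/exec"

    // kubeletActive mirrors the system_svc.go check above: a zero exit status
    // from "systemctl is-active --quiet" means the unit is active.
    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
    }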
	I0108 21:45:56.791488   10884 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:45:56.946389   10884 request.go:629] Waited for 154.533ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/nodes
	I0108 21:45:56.946551   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes
	I0108 21:45:56.946551   10884 round_trippers.go:469] Request Headers:
	I0108 21:45:56.946551   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:45:56.946625   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:45:56.950168   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:45:56.950168   10884 round_trippers.go:577] Response Headers:
	I0108 21:45:56.950168   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:45:56.950168   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:45:56 GMT
	I0108 21:45:56.950168   10884 round_trippers.go:580]     Audit-Id: e46ab8aa-c159-4867-97bd-0c727fd4345e
	I0108 21:45:56.950168   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:45:56.950168   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:45:56.950455   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:45:56.950961   10884 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1846"},"items":[{"metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14729 chars]
	I0108 21:45:56.952100   10884 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:45:56.952205   10884 node_conditions.go:123] node cpu capacity is 2
	I0108 21:45:56.952205   10884 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:45:56.952205   10884 node_conditions.go:123] node cpu capacity is 2
	I0108 21:45:56.952205   10884 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:45:56.952205   10884 node_conditions.go:123] node cpu capacity is 2
	I0108 21:45:56.952205   10884 node_conditions.go:105] duration metric: took 160.6385ms to run NodePressure ...
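The NodePressure pass above walks the node list and reads each node's capacity, which is why three ephemeral-storage/cpu pairs appear for the three-node cluster. A sketch of the same walk:

    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacities lists all nodes and prints the two capacity figures
    // logged above: ephemeral storage and CPU count.
    func printNodeCapacities(ctx context.Context, c kubernetes.Interface) error {
        nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }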
	I0108 21:45:56.952302   10884 start.go:228] waiting for startup goroutines ...
	I0108 21:45:56.952302   10884 start.go:233] waiting for cluster config update ...
	I0108 21:45:56.952302   10884 start.go:242] writing updated cluster config ...
	I0108 21:45:56.966628   10884 config.go:182] Loaded profile config "multinode-554300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:45:56.966628   10884 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\config.json ...
	I0108 21:45:56.970775   10884 out.go:177] * Starting worker node multinode-554300-m02 in cluster multinode-554300
	I0108 21:45:56.971299   10884 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 21:45:56.971499   10884 cache.go:56] Caching tarball of preloaded images
	I0108 21:45:56.971803   10884 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:45:56.971985   10884 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0108 21:45:56.972172   10884 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\config.json ...
	I0108 21:45:56.974895   10884 start.go:365] acquiring machines lock for multinode-554300-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:45:56.974895   10884 start.go:369] acquired machines lock for "multinode-554300-m02" in 0s
	I0108 21:45:56.974895   10884 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:45:56.974895   10884 fix.go:54] fixHost starting: m02
	I0108 21:45:56.975660   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:45:59.099968   10884 main.go:141] libmachine: [stdout =====>] : Off
	
	I0108 21:45:59.099968   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:45:59.099968   10884 fix.go:102] recreateIfNeeded on multinode-554300-m02: state=Stopped err=<nil>
	W0108 21:45:59.100102   10884 fix.go:128] unexpected machine state, will restart: <nil>
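The driver's state probe above is a PowerShell one-liner; "Off" on stdout is what triggers the restart path that follows. A sketch of issuing the same query from Go, reusing the command string exactly as it appears in the log (the wrapper function is illustrative):

    package sketch

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hypervVMState runs ( Hyper-V\Get-VM <name> ).state via PowerShell and
    // returns the trimmed stdout, e.g. "Off" or "Running" as seen above.
    func hypervVMState(name string) (string, error) {
        query := fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name)
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", query).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }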
	I0108 21:45:59.101212   10884 out.go:177] * Restarting existing hyperv VM for "multinode-554300-m02" ...
	I0108 21:45:59.101670   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-554300-m02
	I0108 21:46:02.003838   10884 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:46:02.003933   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:46:02.003933   10884 main.go:141] libmachine: Waiting for host to start...
	I0108 21:46:02.003933   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:46:04.268131   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:46:04.268131   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:46:04.268343   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:46:06.778709   10884 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:46:06.778760   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:46:07.779947   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:46:10.007389   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:46:10.007389   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:46:10.007554   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:46:12.523623   10884 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:46:12.523842   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:46:13.527720   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:46:15.750327   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:46:15.750327   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:46:15.750744   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:46:18.295821   10884 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:46:18.295821   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:46:19.303013   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:46:21.493611   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:46:21.493921   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:46:21.494041   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:46:24.078592   10884 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:46:24.078716   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:46:25.084817   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:46:27.309985   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:46:27.309985   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:46:27.310089   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:46:29.884127   10884 main.go:141] libmachine: [stdout =====>] : 172.29.97.220
	
	I0108 21:46:29.884326   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:46:29.887641   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:46:32.005976   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:46:32.005976   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:46:32.006247   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:46:34.567201   10884 main.go:141] libmachine: [stdout =====>] : 172.29.97.220
	
	I0108 21:46:34.567383   10884 main.go:141] libmachine: [stderr =====>] : 
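
After Hyper-V\Start-VM returns, the driver alternates between querying the VM state and the first NIC's first IP address, pausing roughly a second between attempts, until the address query stops coming back empty (here about 27 seconds after the start, when 172.29.97.220 appears). A sketch of that wait loop; the query callback stands in for a PowerShell runner like the one above so the example stays self-contained:

package main

import (
	"fmt"
	"strings"
	"time"
)

// waitForIP repeatedly evaluates two PowerShell expressions -- VM state and
// first-NIC / first-IP -- until an address comes back or the deadline passes.
// The query function is injected so this sketch runs standalone; in practice
// it would shell out to powershell.exe (names here are hypothetical).
func waitForIP(vm string, timeout time.Duration, query func(string) (string, error)) (string, error) {
	deadline := time.Now().Add(timeout)
	stateExpr := fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm)
	ipExpr := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm)
	for time.Now().Before(deadline) {
		state, err := query(stateExpr)
		if err != nil {
			return "", err
		}
		if strings.TrimSpace(state) == "Running" {
			ip, err := query(ipExpr)
			if err != nil {
				return "", err
			}
			if ip = strings.TrimSpace(ip); ip != "" {
				return ip, nil
			}
		}
		time.Sleep(time.Second) // the log shows ~1s pauses between empty answers
	}
	return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
}

func main() {
	// Stubbed query so the sketch runs anywhere.
	fake := func(expr string) (string, error) {
		if strings.Contains(expr, "ipaddresses") {
			return "172.29.97.220", nil
		}
		return "Running", nil
	}
	ip, err := waitForIP("multinode-554300-m02", 3*time.Minute, fake)
	fmt.Println(ip, err)
}
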
	I0108 21:46:34.567877   10884 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\config.json ...
	I0108 21:46:34.570276   10884 machine.go:88] provisioning docker machine ...
	I0108 21:46:34.570386   10884 buildroot.go:166] provisioning hostname "multinode-554300-m02"
	I0108 21:46:34.570386   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:46:36.689438   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:46:36.689438   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:46:36.689580   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:46:39.222413   10884 main.go:141] libmachine: [stdout =====>] : 172.29.97.220
	
	I0108 21:46:39.222413   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:46:39.227943   10884 main.go:141] libmachine: Using SSH client type: native
	I0108 21:46:39.228730   10884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.97.220 22 <nil> <nil>}
	I0108 21:46:39.228730   10884 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-554300-m02 && echo "multinode-554300-m02" | sudo tee /etc/hostname
	I0108 21:46:39.409174   10884 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-554300-m02
	
	I0108 21:46:39.409287   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:46:41.542441   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:46:41.542441   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:46:41.542553   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:46:44.038207   10884 main.go:141] libmachine: [stdout =====>] : 172.29.97.220
	
	I0108 21:46:44.038319   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:46:44.043595   10884 main.go:141] libmachine: Using SSH client type: native
	I0108 21:46:44.044183   10884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.97.220 22 <nil> <nil>}
	I0108 21:46:44.044718   10884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-554300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-554300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-554300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:46:44.214306   10884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
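
The SSH script above is an idempotent /etc/hosts edit: if some line already ends with the new hostname it does nothing, otherwise it rewrites an existing 127.0.1.1 entry in place or appends one. The same decision logic expressed as a plain string transformation (a sketch, not minikube's implementation):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the shell logic from the log (illustrative sketch):
// leave the hosts content alone if the hostname is already present, otherwise
// replace an existing 127.0.1.1 line or append a new one.
func ensureHostname(hosts, name string) string {
	hasName := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`)
	if hasName.MatchString(hosts) {
		return hosts // already configured; nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	if hosts != "" && !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	before := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(ensureHostname(before, "multinode-554300-m02"))
}
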
	I0108 21:46:44.214306   10884 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0108 21:46:44.214306   10884 buildroot.go:174] setting up certificates
	I0108 21:46:44.214306   10884 provision.go:83] configureAuth start
	I0108 21:46:44.214306   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:46:46.316908   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:46:46.316908   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:46:46.316997   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:46:48.824213   10884 main.go:141] libmachine: [stdout =====>] : 172.29.97.220
	
	I0108 21:46:48.824213   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:46:48.824311   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:46:50.933400   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:46:50.933400   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:46:50.933503   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:46:53.477249   10884 main.go:141] libmachine: [stdout =====>] : 172.29.97.220
	
	I0108 21:46:53.477306   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:46:53.477306   10884 provision.go:138] copyHostCerts
	I0108 21:46:53.477306   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0108 21:46:53.477830   10884 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0108 21:46:53.477969   10884 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0108 21:46:53.478764   10884 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0108 21:46:53.480001   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0108 21:46:53.480566   10884 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0108 21:46:53.480660   10884 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0108 21:46:53.480660   10884 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0108 21:46:53.482720   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0108 21:46:53.482720   10884 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0108 21:46:53.482720   10884 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0108 21:46:53.483994   10884 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0108 21:46:53.485106   10884 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-554300-m02 san=[172.29.97.220 172.29.97.220 localhost 127.0.0.1 minikube multinode-554300-m02]
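
provision.go reports generating a server certificate signed by the minikube CA, with a SAN set covering the node IP, localhost, 127.0.0.1, "minikube" and the node name. A compact illustration of building such a SAN-bearing certificate with Go's crypto/x509; it uses a throwaway CA so it runs standalone, whereas minikube would load ca.pem / ca-key.pem from the .minikube\certs directory (key size, validity and error handling here are illustrative):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA so the sketch is self-contained; errors are ignored for
	// brevity, which a real implementation must not do.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SAN set seen in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-554300-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-554300-m02"},
		IPAddresses:  []net.IP{net.ParseIP("172.29.97.220"), net.ParseIP("127.0.0.1")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
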
	I0108 21:46:53.717213   10884 provision.go:172] copyRemoteCerts
	I0108 21:46:53.730214   10884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:46:53.730214   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:46:55.858535   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:46:55.858535   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:46:55.858633   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:46:58.341549   10884 main.go:141] libmachine: [stdout =====>] : 172.29.97.220
	
	I0108 21:46:58.341549   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:46:58.341728   10884 sshutil.go:53] new ssh client: &{IP:172.29.97.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300-m02\id_rsa Username:docker}
	I0108 21:46:58.466675   10884 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7363827s)
	I0108 21:46:58.466724   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0108 21:46:58.467254   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:46:58.506328   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0108 21:46:58.506877   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0108 21:46:58.544310   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0108 21:46:58.544686   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:46:58.581091   10884 provision.go:86] duration metric: configureAuth took 14.3667167s
	I0108 21:46:58.581091   10884 buildroot.go:189] setting minikube options for container-runtime
	I0108 21:46:58.581855   10884 config.go:182] Loaded profile config "multinode-554300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:46:58.581953   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:47:00.747665   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:47:00.747665   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:47:00.747665   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:47:03.274670   10884 main.go:141] libmachine: [stdout =====>] : 172.29.97.220
	
	I0108 21:47:03.274670   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:47:03.279556   10884 main.go:141] libmachine: Using SSH client type: native
	I0108 21:47:03.280865   10884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.97.220 22 <nil> <nil>}
	I0108 21:47:03.280865   10884 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 21:47:03.439015   10884 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0108 21:47:03.439015   10884 buildroot.go:70] root file system type: tmpfs
	I0108 21:47:03.439252   10884 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 21:47:03.439351   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:47:05.558522   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:47:05.558522   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:47:05.558646   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:47:08.066393   10884 main.go:141] libmachine: [stdout =====>] : 172.29.97.220
	
	I0108 21:47:08.066470   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:47:08.072041   10884 main.go:141] libmachine: Using SSH client type: native
	I0108 21:47:08.072865   10884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.97.220 22 <nil> <nil>}
	I0108 21:47:08.072865   10884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.29.104.77"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 21:47:08.252186   10884 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.29.104.77
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 21:47:08.252186   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:47:10.370473   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:47:10.370473   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:47:10.370473   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:47:12.888629   10884 main.go:141] libmachine: [stdout =====>] : 172.29.97.220
	
	I0108 21:47:12.888926   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:47:12.894095   10884 main.go:141] libmachine: Using SSH client type: native
	I0108 21:47:12.894825   10884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.97.220 22 <nil> <nil>}
	I0108 21:47:12.894825   10884 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 21:47:14.012817   10884 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0108 21:47:14.012817   10884 machine.go:91] provisioned docker machine in 39.4423556s
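
Two details of the docker provisioning above are worth noting. First, in the `printf %!s(MISSING) "[Unit] ..." | sudo tee /lib/systemd/system/docker.service.new` command, `%!s(MISSING)` is Go's fmt marker for a verb whose argument was missing when the command string was echoed into the log, so what actually ran was presumably `printf %s "<unit file>"`. Second, the unit clears ExecStart= with an empty assignment before setting the Hyper-V-specific command line, and it is only moved over /lib/systemd/system/docker.service (followed by daemon-reload, enable and restart) when `diff` shows it differs from what is installed. A sketch of rendering such a unit from a template (the template text and field names are trimmed and illustrative):

package main

import (
	"os"
	"text/template"
)

// dockerUnit is a trimmed-down stand-in for the unit written in the log; the
// real file carries many more directives (limits, Delegate, KillMode, ...).
const dockerUnit = `[Unit]
Description=Docker Application Container Engine
After=network.target minikube-automount.service docker.socket
Requires=minikube-automount.service docker.socket

[Service]
Type=notify
Restart=on-failure
{{if .NoProxy}}Environment="NO_PROXY={{.NoProxy}}"
{{end}}# An empty ExecStart= clears any ExecStart inherited from a base unit;
# otherwise systemd rejects the service for having two ExecStart lines.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider={{.Provider}} --insecure-registry {{.ServiceCIDR}}

[Install]
WantedBy=multi-user.target
`

func main() {
	t := template.Must(template.New("docker.service").Parse(dockerUnit))
	// Values taken from the log; in minikube they come from the cluster config.
	t.Execute(os.Stdout, struct {
		NoProxy, Provider, ServiceCIDR string
	}{"172.29.104.77", "hyperv", "10.96.0.0/12"})
}
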
	I0108 21:47:14.012926   10884 start.go:300] post-start starting for "multinode-554300-m02" (driver="hyperv")
	I0108 21:47:14.012926   10884 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:47:14.025987   10884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:47:14.025987   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:47:16.136438   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:47:16.136864   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:47:16.136864   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:47:18.646237   10884 main.go:141] libmachine: [stdout =====>] : 172.29.97.220
	
	I0108 21:47:18.647228   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:47:18.647779   10884 sshutil.go:53] new ssh client: &{IP:172.29.97.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300-m02\id_rsa Username:docker}
	I0108 21:47:18.754451   10884 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7284421s)
	I0108 21:47:18.767049   10884 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:47:18.773195   10884 command_runner.go:130] > NAME=Buildroot
	I0108 21:47:18.773195   10884 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0108 21:47:18.773195   10884 command_runner.go:130] > ID=buildroot
	I0108 21:47:18.773269   10884 command_runner.go:130] > VERSION_ID=2021.02.12
	I0108 21:47:18.773269   10884 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0108 21:47:18.773269   10884 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 21:47:18.773269   10884 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0108 21:47:18.773269   10884 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0108 21:47:18.774913   10884 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem -> 30082.pem in /etc/ssl/certs
	I0108 21:47:18.775471   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem -> /etc/ssl/certs/30082.pem
	I0108 21:47:18.791338   10884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:47:18.806512   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem --> /etc/ssl/certs/30082.pem (1708 bytes)
	I0108 21:47:18.848238   10884 start.go:303] post-start completed in 4.8352892s
	I0108 21:47:18.848238   10884 fix.go:56] fixHost completed within 1m21.8729581s
	I0108 21:47:18.848238   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:47:21.021006   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:47:21.021006   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:47:21.021006   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:47:23.627374   10884 main.go:141] libmachine: [stdout =====>] : 172.29.97.220
	
	I0108 21:47:23.627374   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:47:23.632289   10884 main.go:141] libmachine: Using SSH client type: native
	I0108 21:47:23.633042   10884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.97.220 22 <nil> <nil>}
	I0108 21:47:23.633042   10884 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 21:47:23.788890   10884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704750443.794334566
	
	I0108 21:47:23.788890   10884 fix.go:206] guest clock: 1704750443.794334566
	I0108 21:47:23.788890   10884 fix.go:219] Guest: 2024-01-08 21:47:23.794334566 +0000 UTC Remote: 2024-01-08 21:47:18.8482388 +0000 UTC m=+231.102851701 (delta=4.946095766s)
	I0108 21:47:23.788890   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:47:25.931547   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:47:25.931797   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:47:25.931895   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:47:28.492613   10884 main.go:141] libmachine: [stdout =====>] : 172.29.97.220
	
	I0108 21:47:28.492613   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:47:28.498742   10884 main.go:141] libmachine: Using SSH client type: native
	I0108 21:47:28.499316   10884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.97.220 22 <nil> <nil>}
	I0108 21:47:28.499461   10884 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1704750443
	I0108 21:47:28.665164   10884 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan  8 21:47:23 UTC 2024
	
	I0108 21:47:28.665258   10884 fix.go:226] clock set: Mon Jan  8 21:47:23 UTC 2024
	 (err=<nil>)
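
`date +%!s(MISSING).%!N(MISSING)` is the same fmt artifact; the guest was asked for `date +%s.%N`, its answer (1704750443.794334566) was compared with the host-side timestamp recorded earlier (a ~4.9s delta), and the guest clock was then reset with `sudo date -s @<epoch>`. A sketch of that check; the drift tolerance and the choice of the host clock as the reference are assumptions for illustration, not minikube's documented behaviour:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockFixCommand parses the guest's `date +%s.%N` output, compares it with a
// reference time, and returns the `date -s` command to run when the drift is
// above the tolerance (both tolerance and reference are illustrative).
func clockFixCommand(guestOut string, host time.Time, tolerance time.Duration) (string, bool, error) {
	secs := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)[0]
	epoch, err := strconv.ParseInt(secs, 10, 64)
	if err != nil {
		return "", false, err
	}
	drift := time.Unix(epoch, 0).Sub(host)
	if drift < 0 {
		drift = -drift
	}
	if drift <= tolerance {
		return "", false, nil
	}
	return fmt.Sprintf("sudo date -s @%d", host.Unix()), true, nil
}

func main() {
	cmd, needed, _ := clockFixCommand("1704750443.794334566",
		time.Unix(1704750438, 0), // reference clock ~4.9s behind the guest
		time.Second)
	fmt.Println(needed, cmd)
}
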
	I0108 21:47:28.665282   10884 start.go:83] releasing machines lock for "multinode-554300-m02", held for 1m31.6899555s
	I0108 21:47:28.665535   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:47:30.808810   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:47:30.808810   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:47:30.808810   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:47:33.367818   10884 main.go:141] libmachine: [stdout =====>] : 172.29.97.220
	
	I0108 21:47:33.367818   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:47:33.368170   10884 out.go:177] * Found network options:
	I0108 21:47:33.369334   10884 out.go:177]   - NO_PROXY=172.29.104.77
	W0108 21:47:33.369897   10884 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 21:47:33.370840   10884 out.go:177]   - NO_PROXY=172.29.104.77
	W0108 21:47:33.371605   10884 proxy.go:119] fail to check proxy env: Error ip not in block
	W0108 21:47:33.372801   10884 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 21:47:33.376444   10884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:47:33.376621   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:47:33.387259   10884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 21:47:33.387259   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:47:35.575248   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:47:35.575425   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:47:35.575425   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:47:35.592433   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:47:35.592433   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:47:35.592433   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:47:38.312950   10884 main.go:141] libmachine: [stdout =====>] : 172.29.97.220
	
	I0108 21:47:38.312950   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:47:38.313246   10884 sshutil.go:53] new ssh client: &{IP:172.29.97.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300-m02\id_rsa Username:docker}
	I0108 21:47:38.329119   10884 main.go:141] libmachine: [stdout =====>] : 172.29.97.220
	
	I0108 21:47:38.329179   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:47:38.329305   10884 sshutil.go:53] new ssh client: &{IP:172.29.97.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300-m02\id_rsa Username:docker}
	I0108 21:47:38.518953   10884 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0108 21:47:38.519983   10884 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1326999s)
	I0108 21:47:38.519983   10884 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 21:47:38.520131   10884 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1436625s)
	W0108 21:47:38.519983   10884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 21:47:38.532306   10884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:47:38.558636   10884 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0108 21:47:38.558636   10884 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
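
The find/mv one-liner (its `%!p(MISSING)` is again a missing-argument artifact; find's `-printf "%p, "` was presumably intended) renames every bridge or podman CNI config that is not already disabled by appending a .mk_disabled suffix; here it catches /etc/cni/net.d/87-podman-bridge.conflist so that kindnet can own pod networking. Equivalent logic as an illustrative sketch:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames bridge/podman CNI configs in dir by appending
// ".mk_disabled", skipping files that are already disabled -- the same effect
// as the find/mv pipeline in the log (sketch, not minikube's code).
func disableBridgeCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNI("/etc/cni/net.d")
	fmt.Println(disabled, err)
}
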
	I0108 21:47:38.559054   10884 start.go:475] detecting cgroup driver to use...
	I0108 21:47:38.559218   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:47:38.588095   10884 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0108 21:47:38.603659   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0108 21:47:38.636011   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 21:47:38.651709   10884 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 21:47:38.664337   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 21:47:38.694542   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 21:47:38.726766   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 21:47:38.758335   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 21:47:38.787092   10884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:47:38.818297   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 21:47:38.849022   10884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:47:38.865062   10884 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0108 21:47:38.878490   10884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:47:38.915142   10884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:47:39.084137   10884 ssh_runner.go:195] Run: sudo systemctl restart containerd
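
The run of `sed -i` commands above rewrites /etc/containerd/config.toml in place: pin sandbox_image to registry.k8s.io/pause:3.9, set restrict_oom_score_adj = false, force SystemdCgroup = false (the cluster uses the cgroupfs driver), map the legacy runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d, before reloading systemd and restarting containerd. The cgroup edit as a small regexp sketch, preserving indentation the same way the sed expression does:

package main

import (
	"fmt"
	"regexp"
)

// forceCgroupfs mirrors the `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`
// edit from the log: any SystemdCgroup assignment is rewritten to false,
// keeping its original indentation.
func forceCgroupfs(configTOML string) string {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
}

func main() {
	in := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
	fmt.Print(forceCgroupfs(in))
}
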
	I0108 21:47:39.111754   10884 start.go:475] detecting cgroup driver to use...
	I0108 21:47:39.124111   10884 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 21:47:39.145680   10884 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0108 21:47:39.145740   10884 command_runner.go:130] > [Unit]
	I0108 21:47:39.145796   10884 command_runner.go:130] > Description=Docker Application Container Engine
	I0108 21:47:39.145796   10884 command_runner.go:130] > Documentation=https://docs.docker.com
	I0108 21:47:39.145796   10884 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0108 21:47:39.145796   10884 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0108 21:47:39.145796   10884 command_runner.go:130] > StartLimitBurst=3
	I0108 21:47:39.145796   10884 command_runner.go:130] > StartLimitIntervalSec=60
	I0108 21:47:39.145796   10884 command_runner.go:130] > [Service]
	I0108 21:47:39.145796   10884 command_runner.go:130] > Type=notify
	I0108 21:47:39.145796   10884 command_runner.go:130] > Restart=on-failure
	I0108 21:47:39.145796   10884 command_runner.go:130] > Environment=NO_PROXY=172.29.104.77
	I0108 21:47:39.145796   10884 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0108 21:47:39.145796   10884 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0108 21:47:39.145796   10884 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0108 21:47:39.145796   10884 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0108 21:47:39.145796   10884 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0108 21:47:39.145796   10884 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0108 21:47:39.145796   10884 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0108 21:47:39.145796   10884 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0108 21:47:39.145796   10884 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0108 21:47:39.145796   10884 command_runner.go:130] > ExecStart=
	I0108 21:47:39.145796   10884 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0108 21:47:39.146373   10884 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0108 21:47:39.146373   10884 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0108 21:47:39.146502   10884 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0108 21:47:39.146502   10884 command_runner.go:130] > LimitNOFILE=infinity
	I0108 21:47:39.146599   10884 command_runner.go:130] > LimitNPROC=infinity
	I0108 21:47:39.146599   10884 command_runner.go:130] > LimitCORE=infinity
	I0108 21:47:39.146599   10884 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0108 21:47:39.146665   10884 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0108 21:47:39.146665   10884 command_runner.go:130] > TasksMax=infinity
	I0108 21:47:39.146665   10884 command_runner.go:130] > TimeoutStartSec=0
	I0108 21:47:39.146723   10884 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0108 21:47:39.146723   10884 command_runner.go:130] > Delegate=yes
	I0108 21:47:39.146786   10884 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0108 21:47:39.146786   10884 command_runner.go:130] > KillMode=process
	I0108 21:47:39.146786   10884 command_runner.go:130] > [Install]
	I0108 21:47:39.146786   10884 command_runner.go:130] > WantedBy=multi-user.target
	I0108 21:47:39.164565   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:47:39.195897   10884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:47:39.240341   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:47:39.277609   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:47:39.313432   10884 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:47:39.382732   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:47:39.404831   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:47:39.431215   10884 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0108 21:47:39.448537   10884 ssh_runner.go:195] Run: which cri-dockerd
	I0108 21:47:39.454399   10884 command_runner.go:130] > /usr/bin/cri-dockerd
	I0108 21:47:39.468544   10884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 21:47:39.485493   10884 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0108 21:47:39.527546   10884 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 21:47:39.715933   10884 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 21:47:39.884954   10884 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0108 21:47:39.885100   10884 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0108 21:47:39.933961   10884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:47:40.105922   10884 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 21:47:41.661192   10884 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5551969s)
	I0108 21:47:41.675148   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0108 21:47:41.708157   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0108 21:47:41.741706   10884 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0108 21:47:41.909305   10884 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 21:47:42.077774   10884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:47:42.240738   10884 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0108 21:47:42.278170   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0108 21:47:42.314743   10884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:47:42.483935   10884 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0108 21:47:42.589223   10884 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 21:47:42.602213   10884 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 21:47:42.609209   10884 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0108 21:47:42.609209   10884 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 21:47:42.609922   10884 command_runner.go:130] > Device: 16h/22d	Inode: 897         Links: 1
	I0108 21:47:42.609922   10884 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0108 21:47:42.609922   10884 command_runner.go:130] > Access: 2024-01-08 21:47:42.510430701 +0000
	I0108 21:47:42.609922   10884 command_runner.go:130] > Modify: 2024-01-08 21:47:42.510430701 +0000
	I0108 21:47:42.609922   10884 command_runner.go:130] > Change: 2024-01-08 21:47:42.514430701 +0000
	I0108 21:47:42.609922   10884 command_runner.go:130] >  Birth: -
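
start.go then waits up to 60s for the /var/run/cri-dockerd.sock socket to appear, confirming it with stat (the output above shows a root:docker-owned socket). A generic wait-for-path sketch of that step (the poll interval and error handling are illustrative):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until the path exists (e.g. a runtime socket such as
// /var/run/cri-dockerd.sock) or the timeout elapses -- a simplified version
// of the "Will wait 60s for socket path" step in the log.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		} else if !os.IsNotExist(err) {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second, time.Second)
	fmt.Println(err)
}
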
	I0108 21:47:42.610106   10884 start.go:543] Will wait 60s for crictl version
	I0108 21:47:42.623835   10884 ssh_runner.go:195] Run: which crictl
	I0108 21:47:42.628711   10884 command_runner.go:130] > /usr/bin/crictl
	I0108 21:47:42.640617   10884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 21:47:42.704071   10884 command_runner.go:130] > Version:  0.1.0
	I0108 21:47:42.704071   10884 command_runner.go:130] > RuntimeName:  docker
	I0108 21:47:42.704071   10884 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0108 21:47:42.704071   10884 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 21:47:42.704071   10884 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0108 21:47:42.714069   10884 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 21:47:42.748455   10884 command_runner.go:130] > 24.0.7
	I0108 21:47:42.757881   10884 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 21:47:42.791761   10884 command_runner.go:130] > 24.0.7
	I0108 21:47:42.792588   10884 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0108 21:47:42.793381   10884 out.go:177]   - env NO_PROXY=172.29.104.77
	I0108 21:47:42.793381   10884 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0108 21:47:42.797424   10884 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0108 21:47:42.797424   10884 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0108 21:47:42.797424   10884 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0108 21:47:42.797424   10884 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:21:91:d6 Flags:up|broadcast|multicast|running}
	I0108 21:47:42.800379   10884 ip.go:210] interface addr: fe80::2be2:9d7a:5cc4:f25c/64
	I0108 21:47:42.800379   10884 ip.go:210] interface addr: 172.29.96.1/20
	I0108 21:47:42.812386   10884 ssh_runner.go:195] Run: grep 172.29.96.1	host.minikube.internal$ /etc/hosts
	I0108 21:47:42.818128   10884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:47:42.834913   10884 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300 for IP: 172.29.97.220
	I0108 21:47:42.835018   10884 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:47:42.835634   10884 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0108 21:47:42.836057   10884 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0108 21:47:42.836057   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 21:47:42.836617   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0108 21:47:42.836909   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 21:47:42.836968   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 21:47:42.837498   10884 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\3008.pem (1338 bytes)
	W0108 21:47:42.837905   10884 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\3008_empty.pem, impossibly tiny 0 bytes
	I0108 21:47:42.838045   10884 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0108 21:47:42.838185   10884 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0108 21:47:42.838185   10884 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0108 21:47:42.839059   10884 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0108 21:47:42.839158   10884 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem (1708 bytes)
	I0108 21:47:42.839158   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:47:42.839713   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\3008.pem -> /usr/share/ca-certificates/3008.pem
	I0108 21:47:42.840003   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem -> /usr/share/ca-certificates/30082.pem
	I0108 21:47:42.840627   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:47:42.880170   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 21:47:42.917621   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:47:42.955064   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 21:47:42.995991   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:47:43.032232   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\3008.pem --> /usr/share/ca-certificates/3008.pem (1338 bytes)
	I0108 21:47:43.069216   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem --> /usr/share/ca-certificates/30082.pem (1708 bytes)
	I0108 21:47:43.120508   10884 ssh_runner.go:195] Run: openssl version
	I0108 21:47:43.127506   10884 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0108 21:47:43.138502   10884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30082.pem && ln -fs /usr/share/ca-certificates/30082.pem /etc/ssl/certs/30082.pem"
	I0108 21:47:43.170004   10884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/30082.pem
	I0108 21:47:43.174869   10884 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 20:28 /usr/share/ca-certificates/30082.pem
	I0108 21:47:43.175904   10884 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:28 /usr/share/ca-certificates/30082.pem
	I0108 21:47:43.186861   10884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30082.pem
	I0108 21:47:43.196503   10884 command_runner.go:130] > 3ec20f2e
	I0108 21:47:43.210084   10884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/30082.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:47:43.239300   10884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:47:43.266143   10884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:47:43.272132   10884 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 20:14 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:47:43.272132   10884 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:14 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:47:43.284117   10884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:47:43.292214   10884 command_runner.go:130] > b5213941
	I0108 21:47:43.307367   10884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:47:43.338353   10884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3008.pem && ln -fs /usr/share/ca-certificates/3008.pem /etc/ssl/certs/3008.pem"
	I0108 21:47:43.367386   10884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3008.pem
	I0108 21:47:43.373600   10884 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 20:28 /usr/share/ca-certificates/3008.pem
	I0108 21:47:43.373600   10884 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:28 /usr/share/ca-certificates/3008.pem
	I0108 21:47:43.387485   10884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3008.pem
	I0108 21:47:43.394763   10884 command_runner.go:130] > 51391683
	I0108 21:47:43.409140   10884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3008.pem /etc/ssl/certs/51391683.0"
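
Each CA bundle copied to /usr/share/ca-certificates is linked into /etc/ssl/certs under its OpenSSL subject hash, which is how OpenSSL-based clients look up trust anchors; the hashes printed above are 3ec20f2e, b5213941 and 51391683. A sketch of that hash-and-symlink pair, shelling out to openssl just as the log does (it would need root to write into /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash and
// symlinks <certsDir>/<hash>.0 back to the certificate, mirroring the
// `openssl x509 -hash -noout` + `ln -fs` pair in the log.
func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // emulate ln -fs: replace an existing link if present
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(link, err)
}
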
	I0108 21:47:43.440559   10884 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 21:47:43.445553   10884 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:47:43.446625   10884 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:47:43.455905   10884 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 21:47:43.493616   10884 command_runner.go:130] > cgroupfs
	I0108 21:47:43.494353   10884 cni.go:84] Creating CNI manager for ""
	I0108 21:47:43.494516   10884 cni.go:136] 3 nodes found, recommending kindnet
	I0108 21:47:43.494516   10884 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:47:43.494516   10884 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.29.97.220 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-554300 NodeName:multinode-554300-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.29.104.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.29.97.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 21:47:43.494516   10884 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.29.97.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-554300-m02"
	  kubeletExtraArgs:
	    node-ip: 172.29.97.220
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.29.104.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
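Editor's note: kubeadm.go:181 above prints the fully rendered kubeadm config for the joining worker (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), built from the options struct logged at kubeadm.go:176. A minimal sketch of how such a document can be produced with Go's text/template follows; the template text and field names are illustrative, not minikube's actual templates.

package main

import (
	"os"
	"text/template"
)

// Illustrative subset of the options logged at kubeadm.go:176 above.
type nodeOpts struct {
	NodeName         string
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
}

// A cut-down InitConfiguration template; the real config also carries
// ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
  taints: []
`

func main() {
	tmpl := template.Must(template.New("init").Parse(initCfg))
	opts := nodeOpts{
		NodeName:         "multinode-554300-m02",
		AdvertiseAddress: "172.29.97.220",
		APIServerPort:    8443,
		CRISocket:        "/var/run/cri-dockerd.sock",
	}
	// Render to stdout; minikube instead copies the rendered file to the node over SSH.
	_ = tmpl.Execute(os.Stdout, opts)
}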
	I0108 21:47:43.494516   10884 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-554300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.97.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-554300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:47:43.509829   10884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 21:47:43.528828   10884 command_runner.go:130] > kubeadm
	I0108 21:47:43.528828   10884 command_runner.go:130] > kubectl
	I0108 21:47:43.528828   10884 command_runner.go:130] > kubelet
	I0108 21:47:43.529824   10884 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:47:43.540828   10884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0108 21:47:43.558471   10884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0108 21:47:43.586322   10884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:47:43.631360   10884 ssh_runner.go:195] Run: grep 172.29.104.77	control-plane.minikube.internal$ /etc/hosts
	I0108 21:47:43.636993   10884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.104.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
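Editor's note: the one-liner above pins control-plane.minikube.internal to the control-plane IP by stripping any existing entry from /etc/hosts and appending a fresh one. A rough Go equivalent of that filter-and-append step is sketched below; the direct overwrite of /etc/hosts (rather than the temp-file-plus-cp used in the log) is a simplification.

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites hostsPath so that exactly one line maps host to ip,
// mirroring the grep -v / echo / cp pipeline in the log above.
func pinHost(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "172.29.104.77", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}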
	I0108 21:47:43.657211   10884 host.go:66] Checking if "multinode-554300" exists ...
	I0108 21:47:43.658226   10884 config.go:182] Loaded profile config "multinode-554300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:47:43.658226   10884 start.go:304] JoinCluster: &{Name:multinode-554300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-554300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.29.104.77 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.97.220 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.100.57 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingres
s:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:47:43.658226   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0108 21:47:43.658226   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:47:45.795628   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:47:45.795881   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:47:45.795881   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:47:48.326511   10884 main.go:141] libmachine: [stdout =====>] : 172.29.104.77
	
	I0108 21:47:48.326511   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:47:48.327109   10884 sshutil.go:53] new ssh client: &{IP:172.29.104.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300\id_rsa Username:docker}
	I0108 21:47:48.540470   10884 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token p7yz5v.f6wh1k1yweu7gnz2 --discovery-token-ca-cert-hash sha256:5d4a576aad216c3b59d844299451f9173aefcd7f6ecd29b777d9935fab24b02c 
	I0108 21:47:48.540829   10884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.8825804s)
	I0108 21:47:48.540967   10884 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.29.97.220 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0108 21:47:48.541009   10884 host.go:66] Checking if "multinode-554300" exists ...
	I0108 21:47:48.555256   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-554300-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0108 21:47:48.555256   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:47:50.681236   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:47:50.681236   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:47:50.681317   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:47:53.265916   10884 main.go:141] libmachine: [stdout =====>] : 172.29.104.77
	
	I0108 21:47:53.265916   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:47:53.265916   10884 sshutil.go:53] new ssh client: &{IP:172.29.104.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300\id_rsa Username:docker}
	I0108 21:47:53.464108   10884 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0108 21:47:53.543834   10884 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-4q524, kube-system/kube-proxy-nbzjb
	I0108 21:47:56.566625   10884 command_runner.go:130] > node/multinode-554300-m02 cordoned
	I0108 21:47:56.566625   10884 command_runner.go:130] > pod "busybox-5bc68d56bd-w2zbn" has DeletionTimestamp older than 1 seconds, skipping
	I0108 21:47:56.566625   10884 command_runner.go:130] > node/multinode-554300-m02 drained
	I0108 21:47:56.566935   10884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-554300-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (8.0115205s)
	I0108 21:47:56.566935   10884 node.go:108] successfully drained node "m02"
	I0108 21:47:56.568069   10884 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 21:47:56.568859   10884 kapi.go:59] client config for multinode-554300: &rest.Config{Host:"https://172.29.104.77:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x188c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:47:56.570137   10884 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0108 21:47:56.570209   10884 round_trippers.go:463] DELETE https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:47:56.570209   10884 round_trippers.go:469] Request Headers:
	I0108 21:47:56.570209   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:47:56.570209   10884 round_trippers.go:473]     Content-Type: application/json
	I0108 21:47:56.570302   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:47:56.586526   10884 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0108 21:47:56.586526   10884 round_trippers.go:577] Response Headers:
	I0108 21:47:56.586526   10884 round_trippers.go:580]     Audit-Id: 6d0fb01d-fb63-4846-a7a9-45341bb8e55f
	I0108 21:47:56.586526   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:47:56.586526   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:47:56.586526   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:47:56.586526   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:47:56.586526   10884 round_trippers.go:580]     Content-Length: 171
	I0108 21:47:56.586526   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:47:56 GMT
	I0108 21:47:56.587179   10884 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-554300-m02","kind":"nodes","uid":"77388bfb-eaa7-4617-992f-c60f1dbca8c7"}}
	I0108 21:47:56.587536   10884 node.go:124] successfully deleted node "m02"
	I0108 21:47:56.587536   10884 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.29.97.220 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
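Editor's note: before rejoining, the stale m02 Node object is removed with a DELETE against /api/v1/nodes/multinode-554300-m02, as the round_trippers lines above show. A minimal client-go sketch of the same delete follows; the kubeconfig path is an assumption, and minikube issues the call through its own kapi helpers rather than this exact code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a typed client from the default kubeconfig location (illustrative).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Issues DELETE /api/v1/nodes/multinode-554300-m02, like the request above.
	err = client.CoreV1().Nodes().Delete(context.Background(), "multinode-554300-m02", metav1.DeleteOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("node deleted")
}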
	I0108 21:47:56.587536   10884 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.29.97.220 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0108 21:47:56.587536   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p7yz5v.f6wh1k1yweu7gnz2 --discovery-token-ca-cert-hash sha256:5d4a576aad216c3b59d844299451f9173aefcd7f6ecd29b777d9935fab24b02c --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-554300-m02"
	I0108 21:47:56.851334   10884 command_runner.go:130] ! W0108 21:47:56.858868    1362 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 21:47:57.393962   10884 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:47:59.195481   10884 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 21:47:59.195481   10884 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 21:47:59.195481   10884 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 21:47:59.195481   10884 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:47:59.195481   10884 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:47:59.195481   10884 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 21:47:59.195481   10884 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0108 21:47:59.195481   10884 command_runner.go:130] > This node has joined the cluster:
	I0108 21:47:59.195481   10884 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0108 21:47:59.195481   10884 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0108 21:47:59.195481   10884 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0108 21:47:59.195481   10884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p7yz5v.f6wh1k1yweu7gnz2 --discovery-token-ca-cert-hash sha256:5d4a576aad216c3b59d844299451f9173aefcd7f6ecd29b777d9935fab24b02c --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-554300-m02": (2.6079326s)
	I0108 21:47:59.196039   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0108 21:47:59.478371   10884 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0108 21:47:59.750666   10884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28 minikube.k8s.io/name=multinode-554300 minikube.k8s.io/updated_at=2024_01_08T21_47_59_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:47:59.897614   10884 command_runner.go:130] > node/multinode-554300-m02 labeled
	I0108 21:47:59.897614   10884 command_runner.go:130] > node/multinode-554300-m03 labeled
	I0108 21:47:59.897614   10884 start.go:306] JoinCluster complete in 16.2393123s
	I0108 21:47:59.897614   10884 cni.go:84] Creating CNI manager for ""
	I0108 21:47:59.897614   10884 cni.go:136] 3 nodes found, recommending kindnet
	I0108 21:47:59.912045   10884 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:47:59.919312   10884 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 21:47:59.919312   10884 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0108 21:47:59.919312   10884 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0108 21:47:59.919312   10884 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 21:47:59.919312   10884 command_runner.go:130] > Access: 2024-01-08 21:44:03.520554000 +0000
	I0108 21:47:59.919312   10884 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0108 21:47:59.919312   10884 command_runner.go:130] > Change: 2024-01-08 21:43:53.914000000 +0000
	I0108 21:47:59.919312   10884 command_runner.go:130] >  Birth: -
	I0108 21:47:59.919312   10884 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 21:47:59.919532   10884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 21:47:59.960506   10884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:48:00.381261   10884 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 21:48:00.381261   10884 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 21:48:00.381261   10884 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 21:48:00.381261   10884 command_runner.go:130] > daemonset.apps/kindnet configured
	I0108 21:48:00.382912   10884 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 21:48:00.383348   10884 kapi.go:59] client config for multinode-554300: &rest.Config{Host:"https://172.29.104.77:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x188c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:48:00.383348   10884 round_trippers.go:463] GET https://172.29.104.77:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 21:48:00.383348   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:00.383348   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:00.384351   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:00.387345   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:48:00.387345   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:00.387345   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:00 GMT
	I0108 21:48:00.387345   10884 round_trippers.go:580]     Audit-Id: 54025350-dd9d-4929-baf6-578bcace71d0
	I0108 21:48:00.387567   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:00.387567   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:00.387567   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:00.387567   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:00.387567   10884 round_trippers.go:580]     Content-Length: 292
	I0108 21:48:00.387567   10884 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a05171e3-49ee-4610-ae26-e93c6a171dfe","resourceVersion":"1846","creationTimestamp":"2024-01-08T21:23:32Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 21:48:00.387716   10884 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-554300" context rescaled to 1 replicas
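Editor's note: kapi.go:248 above rescales the coredns deployment by reading its Scale subresource (the autoscaling/v1 Scale object in the response body) and writing the desired replica count back. A hedged client-go sketch of that read-modify-write follows; the kubeconfig path, error handling and hard-coded replica count are illustrative.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// rescaleCoreDNS sets the coredns deployment in kube-system to the given
// replica count via its /scale subresource, as in the GET above.
func rescaleCoreDNS(client kubernetes.Interface, replicas int32) error {
	ctx := context.Background()
	scale, err := client.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if scale.Spec.Replicas == replicas {
		return nil // already at the desired count, nothing to write
	}
	scale.Spec.Replicas = replicas
	_, err = client.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := rescaleCoreDNS(client, 1); err != nil {
		panic(err)
	}
}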
	I0108 21:48:00.387897   10884 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.29.97.220 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0108 21:48:00.388549   10884 out.go:177] * Verifying Kubernetes components...
	I0108 21:48:00.401524   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:48:00.421117   10884 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 21:48:00.421117   10884 kapi.go:59] client config for multinode-554300: &rest.Config{Host:"https://172.29.104.77:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x188c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:48:00.421799   10884 node_ready.go:35] waiting up to 6m0s for node "multinode-554300-m02" to be "Ready" ...
	I0108 21:48:00.421799   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:48:00.421799   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:00.421799   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:00.421799   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:00.428794   10884 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 21:48:00.429638   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:00.429638   10884 round_trippers.go:580]     Audit-Id: edeba7eb-4bc4-41a9-885f-5d07340a6a84
	I0108 21:48:00.429638   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:00.429638   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:00.429638   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:00.429638   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:00.429638   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:00 GMT
	I0108 21:48:00.430140   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"9132c7d7-a2f4-429d-b538-ad18254f1c39","resourceVersion":"2002","creationTimestamp":"2024-01-08T21:47:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_47_59_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:47:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3984 chars]
	I0108 21:48:00.931984   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:48:00.932049   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:00.932049   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:00.932049   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:00.936030   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:48:00.936030   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:00.936030   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:00.936136   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:00.936136   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:00.936136   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:00 GMT
	I0108 21:48:00.936136   10884 round_trippers.go:580]     Audit-Id: 3409d2e0-c9d6-46fa-984b-4c00933b5598
	I0108 21:48:00.936136   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:00.936609   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"9132c7d7-a2f4-429d-b538-ad18254f1c39","resourceVersion":"2002","creationTimestamp":"2024-01-08T21:47:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_47_59_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:47:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3984 chars]
	I0108 21:48:01.431284   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:48:01.431344   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:01.431344   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:01.431344   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:01.434820   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:48:01.435648   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:01.435648   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:01.435648   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:01.435648   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:01 GMT
	I0108 21:48:01.435648   10884 round_trippers.go:580]     Audit-Id: 92ecc68b-c28e-49ac-8ba5-4f674d60c5f8
	I0108 21:48:01.435648   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:01.435648   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:01.435968   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"9132c7d7-a2f4-429d-b538-ad18254f1c39","resourceVersion":"2010","creationTimestamp":"2024-01-08T21:47:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_47_59_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:47:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 4093 chars]
	I0108 21:48:01.933283   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:48:01.933361   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:01.933361   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:01.933361   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:01.937107   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:48:01.937107   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:01.937107   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:01.937107   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:01.937107   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:01.937107   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:01.937107   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:01 GMT
	I0108 21:48:01.937107   10884 round_trippers.go:580]     Audit-Id: ce20a3b3-fe78-4025-8738-3d8ad8b0bed8
	I0108 21:48:01.937727   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"9132c7d7-a2f4-429d-b538-ad18254f1c39","resourceVersion":"2010","creationTimestamp":"2024-01-08T21:47:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_47_59_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:47:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 4093 chars]
	I0108 21:48:02.434498   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:48:02.434566   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:02.434566   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:02.434566   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:02.437853   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:48:02.438856   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:02.438923   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:02.438923   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:02 GMT
	I0108 21:48:02.438923   10884 round_trippers.go:580]     Audit-Id: cba2e42a-87bc-4a6a-84bf-57c5b46837af
	I0108 21:48:02.438923   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:02.438923   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:02.438923   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:02.439320   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"9132c7d7-a2f4-429d-b538-ad18254f1c39","resourceVersion":"2010","creationTimestamp":"2024-01-08T21:47:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_47_59_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:47:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 4093 chars]
	I0108 21:48:02.439885   10884 node_ready.go:58] node "multinode-554300-m02" has status "Ready":"False"
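Editor's note: from this point the log is one long poll loop: node_ready.go re-fetches the Node object roughly every 500 ms and reports "Ready":"False" until the NodeReady condition flips. A minimal client-go sketch of such a wait loop follows; the kubeconfig path, interval and timeout are assumptions, and minikube's node_ready.go has its own retry logic.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls GET /api/v1/nodes/<name> until the NodeReady condition
// is True or the timeout expires, like the requests repeated below.
func waitNodeReady(client kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(client, "multinode-554300-m02", 6*time.Minute); err != nil {
		panic(err)
	}
}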
	I0108 21:48:02.936765   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:48:02.937290   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:02.937733   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:02.937733   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:02.941219   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:48:02.942041   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:02.942041   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:02.942041   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:02.942136   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:02 GMT
	I0108 21:48:02.942136   10884 round_trippers.go:580]     Audit-Id: 53580183-b221-4b45-8e31-92e723a333d0
	I0108 21:48:02.942136   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:02.942221   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:02.942364   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"9132c7d7-a2f4-429d-b538-ad18254f1c39","resourceVersion":"2010","creationTimestamp":"2024-01-08T21:47:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_47_59_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:47:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 4093 chars]
	I0108 21:48:03.427143   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:48:03.427222   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:03.427222   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:03.427222   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:03.430594   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:48:03.430594   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:03.430594   10884 round_trippers.go:580]     Audit-Id: f3cc0339-ab02-4027-81fe-64ba781f23b7
	I0108 21:48:03.431305   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:03.431305   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:03.431305   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:03.431305   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:03.431305   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:03 GMT
	I0108 21:48:03.431305   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"9132c7d7-a2f4-429d-b538-ad18254f1c39","resourceVersion":"2010","creationTimestamp":"2024-01-08T21:47:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_47_59_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:47:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 4093 chars]
	I0108 21:48:03.927256   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:48:03.927256   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:03.927256   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:03.927338   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:03.930995   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:48:03.930995   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:03.930995   10884 round_trippers.go:580]     Audit-Id: bbea3d94-2a28-4196-8c93-62ad8805d168
	I0108 21:48:03.930995   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:03.930995   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:03.930995   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:03.930995   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:03.931331   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:03 GMT
	I0108 21:48:03.931403   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"9132c7d7-a2f4-429d-b538-ad18254f1c39","resourceVersion":"2010","creationTimestamp":"2024-01-08T21:47:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_47_59_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:47:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 4093 chars]
	I0108 21:48:04.434806   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:48:04.434806   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:04.434806   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:04.434889   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:04.438196   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:48:04.438196   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:04.438196   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:04.438196   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:04.438196   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:04.438985   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:04.438985   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:04 GMT
	I0108 21:48:04.438985   10884 round_trippers.go:580]     Audit-Id: fa68a863-bbc9-4b5a-9228-5286c438e918
	I0108 21:48:04.439185   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"9132c7d7-a2f4-429d-b538-ad18254f1c39","resourceVersion":"2010","creationTimestamp":"2024-01-08T21:47:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_47_59_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:47:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 4093 chars]
	I0108 21:48:04.936511   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:48:04.936511   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:04.936511   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:04.936511   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:04.940131   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:48:04.940131   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:04.940131   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:04.940131   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:04.940131   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:04 GMT
	I0108 21:48:04.940131   10884 round_trippers.go:580]     Audit-Id: 7193a9a5-48a3-4ce7-9618-bc3a2ed97173
	I0108 21:48:04.940131   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:04.940131   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:04.941268   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"9132c7d7-a2f4-429d-b538-ad18254f1c39","resourceVersion":"2010","creationTimestamp":"2024-01-08T21:47:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_47_59_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:47:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 4093 chars]
	I0108 21:48:04.941786   10884 node_ready.go:58] node "multinode-554300-m02" has status "Ready":"False"
	I0108 21:48:05.436364   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:48:05.436364   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:05.436364   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:05.436364   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:05.441935   10884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 21:48:05.442953   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:05.442986   10884 round_trippers.go:580]     Audit-Id: c3a6ca6f-c4c9-4fa9-ac31-2cab4c420a74
	I0108 21:48:05.442986   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:05.442986   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:05.442986   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:05.442986   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:05.442986   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:05 GMT
	I0108 21:48:05.443269   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"9132c7d7-a2f4-429d-b538-ad18254f1c39","resourceVersion":"2010","creationTimestamp":"2024-01-08T21:47:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_47_59_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:47:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 4093 chars]
	I0108 21:48:05.936757   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:48:05.936938   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:05.936986   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:05.936986   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:05.940556   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:48:05.940556   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:05.940556   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:05.940556   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:05 GMT
	I0108 21:48:05.940556   10884 round_trippers.go:580]     Audit-Id: b1025752-9c9c-4dae-827a-4a1528272acb
	I0108 21:48:05.940556   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:05.940556   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:05.940556   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:05.940928   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"9132c7d7-a2f4-429d-b538-ad18254f1c39","resourceVersion":"2010","creationTimestamp":"2024-01-08T21:47:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_47_59_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:47:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 4093 chars]
	I0108 21:48:06.426212   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:48:06.426212   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:06.426212   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:06.426212   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:06.429818   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:48:06.429818   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:06.429818   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:06.429818   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:06.429818   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:06.429818   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:06.429818   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:06 GMT
	I0108 21:48:06.430555   10884 round_trippers.go:580]     Audit-Id: 0f2c1e3d-605e-4f9e-bb89-4d902d89e458
	I0108 21:48:06.430910   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"9132c7d7-a2f4-429d-b538-ad18254f1c39","resourceVersion":"2010","creationTimestamp":"2024-01-08T21:47:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_47_59_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:47:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 4093 chars]
	I0108 21:48:06.927196   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:48:06.927196   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:06.927196   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:06.927196   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:06.930907   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:48:06.930934   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:06.930972   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:06 GMT
	I0108 21:48:06.930972   10884 round_trippers.go:580]     Audit-Id: d7644fb1-d7f1-4b4d-8d32-2b4320675f92
	I0108 21:48:06.930972   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:06.930972   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:06.930972   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:06.930972   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:06.931229   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"9132c7d7-a2f4-429d-b538-ad18254f1c39","resourceVersion":"2010","creationTimestamp":"2024-01-08T21:47:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_47_59_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:47:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 4093 chars]
	I0108 21:48:07.428235   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:48:07.428278   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:07.428316   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:07.428316   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:07.434335   10884 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 21:48:07.434335   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:07.434335   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:07.434335   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:07.434335   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:07.434335   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:07 GMT
	I0108 21:48:07.434335   10884 round_trippers.go:580]     Audit-Id: 85f1aa9c-83e6-4dec-ba41-cadb0baa7a91
	I0108 21:48:07.434335   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:07.434970   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"9132c7d7-a2f4-429d-b538-ad18254f1c39","resourceVersion":"2010","creationTimestamp":"2024-01-08T21:47:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_47_59_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:47:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 4093 chars]
	I0108 21:48:07.434970   10884 node_ready.go:58] node "multinode-554300-m02" has status "Ready":"False"
	I0108 21:48:07.928242   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:48:07.929596   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:07.929596   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:07.930038   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:07.934790   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:48:07.934891   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:07.934891   10884 round_trippers.go:580]     Audit-Id: 58a3bd2d-90ac-45b7-84fc-e56dc434d2b4
	I0108 21:48:07.934891   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:07.934891   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:07.934891   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:07.935001   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:07.935001   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:07 GMT
	I0108 21:48:07.935230   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"9132c7d7-a2f4-429d-b538-ad18254f1c39","resourceVersion":"2024","creationTimestamp":"2024-01-08T21:47:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_47_59_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:47:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3925 chars]
	I0108 21:48:07.936065   10884 node_ready.go:49] node "multinode-554300-m02" has status "Ready":"True"
	I0108 21:48:07.936128   10884 node_ready.go:38] duration metric: took 7.5142934s waiting for node "multinode-554300-m02" to be "Ready" ...
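
For context on the node_ready wait that just completed above: it is a poll of the Node object's Ready condition over the Kubernetes API, roughly every 500ms, until the condition flips to True or the timeout expires. The following is a minimal client-go sketch of that pattern, not minikube's own node_ready code; the kubeconfig path, node name, and timeout are assumptions taken from this log.

package main

import (
	"context"
	"fmt"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

// nodeReady reports whether the node's Ready condition is True,
// i.e. the "Ready":"True" status the log is waiting for.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config") // assumed location
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute) // the log waits up to 6m0s
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-554300-m02", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms GET cadence in the log
	}
	fmt.Println("timed out waiting for node to become Ready")
}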
	I0108 21:48:07.936128   10884 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:48:07.936128   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods
	I0108 21:48:07.936128   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:07.936128   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:07.936128   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:07.940431   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:48:07.940431   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:07.940431   10884 round_trippers.go:580]     Audit-Id: b4dba0ff-9c82-432e-b3c4-5db7f1a40ea9
	I0108 21:48:07.940431   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:07.940431   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:07.941458   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:07.941458   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:07.941458   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:07 GMT
	I0108 21:48:07.944183   10884 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2026"},"items":[{"metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"1842","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83377 chars]
	I0108 21:48:07.947946   10884 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-q7vd7" in "kube-system" namespace to be "Ready" ...
	I0108 21:48:07.948035   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q7vd7
	I0108 21:48:07.948035   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:07.948035   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:07.948035   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:07.950727   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:48:07.950727   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:07.950727   10884 round_trippers.go:580]     Audit-Id: 5d1646b1-ecd1-49c1-8e73-25d32d44d8f4
	I0108 21:48:07.950727   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:07.950727   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:07.950727   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:07.950727   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:07.951116   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:07 GMT
	I0108 21:48:07.951305   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"1842","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I0108 21:48:07.951977   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:48:07.951977   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:07.951977   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:07.951977   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:07.954583   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:48:07.954583   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:07.954583   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:07 GMT
	I0108 21:48:07.954583   10884 round_trippers.go:580]     Audit-Id: 391d679a-7108-43df-bbb8-6431cd93e4c8
	I0108 21:48:07.954583   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:07.955086   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:07.955086   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:07.955086   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:07.955743   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:48:07.955874   10884 pod_ready.go:92] pod "coredns-5dd5756b68-q7vd7" in "kube-system" namespace has status "Ready":"True"
	I0108 21:48:07.955874   10884 pod_ready.go:81] duration metric: took 7.928ms waiting for pod "coredns-5dd5756b68-q7vd7" in "kube-system" namespace to be "Ready" ...
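
The per-pod checks that follow (coredns, etcd, kube-apiserver, ...) each boil down to reading the pod's PodReady condition from its status. A small sketch of that check, under the assumption that the Pod object has already been fetched via a GET like the ones in this log; podReady is a hypothetical helper, not a minikube function.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podReady returns true when the PodReady condition is True, the same
// "Ready":"True" state the pod_ready lines above report.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Stub object; in the test this comes from
	// GET /api/v1/namespaces/kube-system/pods/<name>.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionTrue},
			},
		},
	}
	fmt.Println("ready:", podReady(pod))
}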
	I0108 21:48:07.955874   10884 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:48:07.955874   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-554300
	I0108 21:48:07.955874   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:07.955874   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:07.955874   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:07.958660   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:48:07.958660   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:07.958660   10884 round_trippers.go:580]     Audit-Id: 1e1239c8-3b86-4bf1-a9ea-b42e3948b8b3
	I0108 21:48:07.958660   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:07.958660   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:07.958660   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:07.958660   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:07.958660   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:07 GMT
	I0108 21:48:07.959403   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-554300","namespace":"kube-system","uid":"55fb89f1-0f93-4967-877e-c170530dd9ed","resourceVersion":"1804","creationTimestamp":"2024-01-08T21:45:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.29.104.77:2379","kubernetes.io/config.hash":"eeac3c939de11db202bb72fb9d694f8f","kubernetes.io/config.mirror":"eeac3c939de11db202bb72fb9d694f8f","kubernetes.io/config.seen":"2024-01-08T21:45:22.563167670Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:45:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5863 chars]
	I0108 21:48:07.959403   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:48:07.959992   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:07.959992   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:07.959992   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:07.962232   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:48:07.962232   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:07.962232   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:07.962232   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:07.962232   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:07 GMT
	I0108 21:48:07.962232   10884 round_trippers.go:580]     Audit-Id: 9da7c7a6-2227-44e0-99b2-a854caa1cd39
	I0108 21:48:07.962232   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:07.962232   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:07.963399   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:48:07.963753   10884 pod_ready.go:92] pod "etcd-multinode-554300" in "kube-system" namespace has status "Ready":"True"
	I0108 21:48:07.963753   10884 pod_ready.go:81] duration metric: took 7.8785ms waiting for pod "etcd-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:48:07.963859   10884 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:48:07.963859   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-554300
	I0108 21:48:07.963971   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:07.963971   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:07.963971   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:07.970150   10884 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 21:48:07.970150   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:07.970150   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:07 GMT
	I0108 21:48:07.970150   10884 round_trippers.go:580]     Audit-Id: dadb22a7-d3eb-4d0a-9c1c-fef4729ac9f9
	I0108 21:48:07.970150   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:07.970150   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:07.970150   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:07.970150   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:07.970150   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-554300","namespace":"kube-system","uid":"ad4821d4-6eff-483c-b12d-9123225ab172","resourceVersion":"1805","creationTimestamp":"2024-01-08T21:45:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.29.104.77:8443","kubernetes.io/config.hash":"2ecc3d0c6efe0268d7822d74e28f9f5b","kubernetes.io/config.mirror":"2ecc3d0c6efe0268d7822d74e28f9f5b","kubernetes.io/config.seen":"2024-01-08T21:45:22.563174170Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:45:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7400 chars]
	I0108 21:48:07.971178   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:48:07.971233   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:07.971233   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:07.971233   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:07.973851   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:48:07.973851   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:07.973851   10884 round_trippers.go:580]     Audit-Id: 64475e14-00b7-439b-8945-1a8d7b825735
	I0108 21:48:07.973851   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:07.973851   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:07.973851   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:07.973851   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:07.973851   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:07 GMT
	I0108 21:48:07.973851   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:48:07.974846   10884 pod_ready.go:92] pod "kube-apiserver-multinode-554300" in "kube-system" namespace has status "Ready":"True"
	I0108 21:48:07.974881   10884 pod_ready.go:81] duration metric: took 11.0225ms waiting for pod "kube-apiserver-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:48:07.974881   10884 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:48:07.974881   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-554300
	I0108 21:48:07.974881   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:07.974881   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:07.974881   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:07.977507   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:48:07.977507   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:07.977507   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:07.977507   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:07.977507   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:07.977507   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:07.977507   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:07 GMT
	I0108 21:48:07.977507   10884 round_trippers.go:580]     Audit-Id: 6573c0a0-7266-4ff5-a331-a7655b220c1e
	I0108 21:48:07.978827   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-554300","namespace":"kube-system","uid":"c5c47910-dee9-4e42-8623-dbc45d13564f","resourceVersion":"1813","creationTimestamp":"2024-01-08T21:23:32Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8c1bbf537f0866640621fd36da3286e0","kubernetes.io/config.mirror":"8c1bbf537f0866640621fd36da3286e0","kubernetes.io/config.seen":"2024-01-08T21:23:32.232191792Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I0108 21:48:07.978870   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:48:07.978870   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:07.979425   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:07.979425   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:07.982213   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:48:07.982213   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:07.982698   10884 round_trippers.go:580]     Audit-Id: d9b3c01f-5249-4d51-b1e5-cbf9f16b1bc2
	I0108 21:48:07.982698   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:07.982698   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:07.982698   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:07.982698   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:07.982698   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:07 GMT
	I0108 21:48:07.983020   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:48:07.983143   10884 pod_ready.go:92] pod "kube-controller-manager-multinode-554300" in "kube-system" namespace has status "Ready":"True"
	I0108 21:48:07.983143   10884 pod_ready.go:81] duration metric: took 8.2617ms waiting for pod "kube-controller-manager-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:48:07.983143   10884 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jsq7c" in "kube-system" namespace to be "Ready" ...
	I0108 21:48:08.132065   10884 request.go:629] Waited for 148.6791ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jsq7c
	I0108 21:48:08.132065   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jsq7c
	I0108 21:48:08.132244   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:08.132275   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:08.132275   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:08.135826   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:48:08.135826   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:08.135826   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:08 GMT
	I0108 21:48:08.135826   10884 round_trippers.go:580]     Audit-Id: 5960cac1-0276-4ea7-819e-7332bbd733cb
	I0108 21:48:08.135826   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:08.135826   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:08.136213   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:08.136213   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:08.136769   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jsq7c","generateName":"kube-proxy-","namespace":"kube-system","uid":"cbc6a2d2-bb66-4af4-8a7d-315bc293cac0","resourceVersion":"1807","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0597a1a7-46a6-4b69-bb4b-83cbc4a269de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0597a1a7-46a6-4b69-bb4b-83cbc4a269de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0108 21:48:08.335798   10884 request.go:629] Waited for 198.1804ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:48:08.336167   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:48:08.336194   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:08.336194   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:08.336194   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:08.340973   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:48:08.341544   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:08.341544   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:08.341544   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:08.341581   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:08.341581   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:08 GMT
	I0108 21:48:08.341581   10884 round_trippers.go:580]     Audit-Id: 4bbcb9d8-6c3d-4448-bdea-3e53f9061a43
	I0108 21:48:08.341614   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:08.341614   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:48:08.342398   10884 pod_ready.go:92] pod "kube-proxy-jsq7c" in "kube-system" namespace has status "Ready":"True"
	I0108 21:48:08.342398   10884 pod_ready.go:81] duration metric: took 359.2535ms waiting for pod "kube-proxy-jsq7c" in "kube-system" namespace to be "Ready" ...
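
The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's own rate limiter, not from the API server: bursts of GETs beyond the client's QPS/Burst budget are delayed locally and logged by request.go. A sketch of where that budget lives, assuming a kubeconfig at the default path; the raised values are arbitrary examples, not minikube's settings.

package main

import (
	"fmt"
	"path/filepath"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}

	// client-go defaults are QPS=5, Burst=10; requests beyond that are
	// delayed client-side, which is what the Waited-for messages record.
	cfg.QPS = 50
	cfg.Burst = 100

	client := kubernetes.NewForConfigOrDie(cfg)
	fmt.Printf("client configured: %T\n", client)
}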
	I0108 21:48:08.342398   10884 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nbzjb" in "kube-system" namespace to be "Ready" ...
	I0108 21:48:08.539091   10884 request.go:629] Waited for 196.6917ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nbzjb
	I0108 21:48:08.539091   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nbzjb
	I0108 21:48:08.539091   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:08.539091   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:08.539091   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:08.543690   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:48:08.543690   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:08.544540   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:08 GMT
	I0108 21:48:08.544540   10884 round_trippers.go:580]     Audit-Id: 9ac5963f-3feb-47ef-af81-10913c28772a
	I0108 21:48:08.544540   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:08.544540   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:08.544540   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:08.544540   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:08.544854   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nbzjb","generateName":"kube-proxy-","namespace":"kube-system","uid":"73b08d5a-2015-4712-92b4-2d12298e9fc3","resourceVersion":"2004","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0597a1a7-46a6-4b69-bb4b-83cbc4a269de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0597a1a7-46a6-4b69-bb4b-83cbc4a269de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0108 21:48:08.742683   10884 request.go:629] Waited for 197.0131ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:48:08.742783   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:48:08.742783   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:08.742783   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:08.742783   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:08.747805   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:48:08.747805   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:08.747994   10884 round_trippers.go:580]     Audit-Id: 94ddee58-07fc-49d0-90ba-e039e4fbe5e1
	I0108 21:48:08.747994   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:08.747994   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:08.747994   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:08.747994   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:08.747994   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:08 GMT
	I0108 21:48:08.748209   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"9132c7d7-a2f4-429d-b538-ad18254f1c39","resourceVersion":"2024","creationTimestamp":"2024-01-08T21:47:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_47_59_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:47:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3925 chars]
	I0108 21:48:08.748242   10884 pod_ready.go:92] pod "kube-proxy-nbzjb" in "kube-system" namespace has status "Ready":"True"
	I0108 21:48:08.748242   10884 pod_ready.go:81] duration metric: took 405.8419ms waiting for pod "kube-proxy-nbzjb" in "kube-system" namespace to be "Ready" ...
	I0108 21:48:08.748242   10884 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pdt95" in "kube-system" namespace to be "Ready" ...
	I0108 21:48:08.944043   10884 request.go:629] Waited for 195.6367ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pdt95
	I0108 21:48:08.944364   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pdt95
	I0108 21:48:08.944364   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:08.944364   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:08.944364   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:08.947770   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:48:08.947770   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:08.947770   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:08.947770   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:08.947770   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:08 GMT
	I0108 21:48:08.948234   10884 round_trippers.go:580]     Audit-Id: bfbddaa5-ef94-44ff-96e6-6520e3915a6d
	I0108 21:48:08.948234   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:08.948234   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:08.948606   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pdt95","generateName":"kube-proxy-","namespace":"kube-system","uid":"e4aa76bc-96be-46f8-bc0e-7f3a6caa9883","resourceVersion":"1874","creationTimestamp":"2024-01-08T21:31:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0597a1a7-46a6-4b69-bb4b-83cbc4a269de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:31:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0597a1a7-46a6-4b69-bb4b-83cbc4a269de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5972 chars]
	I0108 21:48:09.130729   10884 request.go:629] Waited for 181.0769ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:48:09.131021   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:48:09.131160   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:09.131160   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:09.131212   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:09.134767   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:48:09.135696   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:09.135696   10884 round_trippers.go:580]     Audit-Id: 6811f9f7-b006-42ff-b975-884509987224
	I0108 21:48:09.135696   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:09.135696   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:09.135696   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:09.135778   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:09.135778   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:09 GMT
	I0108 21:48:09.136525   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"fc944979-99f9-46c6-a35f-f2c3e1c020f4","resourceVersion":"2003","creationTimestamp":"2024-01-08T21:41:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_47_59_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:41:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 4392 chars]
	I0108 21:48:09.136935   10884 pod_ready.go:97] node "multinode-554300-m03" hosting pod "kube-proxy-pdt95" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-554300-m03" has status "Ready":"Unknown"
	I0108 21:48:09.136935   10884 pod_ready.go:81] duration metric: took 388.6909ms waiting for pod "kube-proxy-pdt95" in "kube-system" namespace to be "Ready" ...
	E0108 21:48:09.136935   10884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-554300-m03" hosting pod "kube-proxy-pdt95" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-554300-m03" has status "Ready":"Unknown"
	I0108 21:48:09.136935   10884 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:48:09.334304   10884 request.go:629] Waited for 197.3687ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-554300
	I0108 21:48:09.334304   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-554300
	I0108 21:48:09.334304   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:09.334304   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:09.334304   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:09.339185   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:48:09.339185   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:09.339185   10884 round_trippers.go:580]     Audit-Id: 88c8ab4d-0351-4ba6-9d6e-a554f3430380
	I0108 21:48:09.339185   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:09.339185   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:09.339185   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:09.339657   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:09.339657   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:09 GMT
	I0108 21:48:09.340006   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-554300","namespace":"kube-system","uid":"f5b78bba-6cd0-495b-b6d6-c9afd93b3534","resourceVersion":"1806","creationTimestamp":"2024-01-08T21:23:32Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"33f58bfb7438c4df2c3fb56db1f613ab","kubernetes.io/config.mirror":"33f58bfb7438c4df2c3fb56db1f613ab","kubernetes.io/config.seen":"2024-01-08T21:23:32.232192792Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I0108 21:48:09.538391   10884 request.go:629] Waited for 197.5935ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:48:09.538391   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:48:09.538391   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:09.538391   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:09.538391   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:09.542259   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:48:09.542259   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:09.543036   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:09 GMT
	I0108 21:48:09.543036   10884 round_trippers.go:580]     Audit-Id: 2e5cf711-df10-4a5a-80a9-bfad5d690237
	I0108 21:48:09.543036   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:09.543036   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:09.543036   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:09.543036   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:09.543239   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:48:09.543886   10884 pod_ready.go:92] pod "kube-scheduler-multinode-554300" in "kube-system" namespace has status "Ready":"True"
	I0108 21:48:09.543926   10884 pod_ready.go:81] duration metric: took 406.9895ms waiting for pod "kube-scheduler-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:48:09.543926   10884 pod_ready.go:38] duration metric: took 1.6077904s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:48:09.543926   10884 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:48:09.558690   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:48:09.582598   10884 system_svc.go:56] duration metric: took 38.6713ms WaitForService to wait for kubelet.
	I0108 21:48:09.582726   10884 kubeadm.go:581] duration metric: took 9.1947137s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 21:48:09.582726   10884 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:48:09.741977   10884 request.go:629] Waited for 159.145ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/nodes
	I0108 21:48:09.742060   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes
	I0108 21:48:09.742060   10884 round_trippers.go:469] Request Headers:
	I0108 21:48:09.742060   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:48:09.742060   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:48:09.746730   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:48:09.746730   10884 round_trippers.go:577] Response Headers:
	I0108 21:48:09.746730   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:48:09.746730   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:48:09 GMT
	I0108 21:48:09.746730   10884 round_trippers.go:580]     Audit-Id: 592f5eb1-7f72-4c8e-b1a5-0469724201f7
	I0108 21:48:09.746730   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:48:09.746730   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:48:09.746987   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:48:09.747970   10884 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2028"},"items":[{"metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15592 chars]
	I0108 21:48:09.748372   10884 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:48:09.748904   10884 node_conditions.go:123] node cpu capacity is 2
	I0108 21:48:09.748904   10884 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:48:09.748904   10884 node_conditions.go:123] node cpu capacity is 2
	I0108 21:48:09.748904   10884 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:48:09.748904   10884 node_conditions.go:123] node cpu capacity is 2
	I0108 21:48:09.748904   10884 node_conditions.go:105] duration metric: took 166.1778ms to run NodePressure ...
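
Editor's note: the three capacity pairs above are read out of the NodeList response printed a few lines earlier (one cpu/ephemeral-storage pair per node of the three-node cluster). As an illustration only, and not minikube's own types, a Go sketch of extracting those two capacity fields from such a response:

package main

import (
	"encoding/json"
	"fmt"
)

// nodeList mirrors only the fields the capacity check needs; everything else
// in the API response is ignored. (Illustrative shape, not minikube's structs.)
type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Capacity map[string]string `json:"capacity"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	// Trimmed example payload; the real response body is ~15k characters.
	raw := []byte(`{"items":[{"metadata":{"name":"multinode-554300"},
	  "status":{"capacity":{"cpu":"2","ephemeral-storage":"17784752Ki"}}}]}`)

	var nl nodeList
	if err := json.Unmarshal(raw, &nl); err != nil {
		panic(err)
	}
	for _, n := range nl.Items {
		fmt.Printf("node %s: storage ephemeral capacity is %s, cpu capacity is %s\n",
			n.Metadata.Name, n.Status.Capacity["ephemeral-storage"], n.Status.Capacity["cpu"])
	}
}
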
	I0108 21:48:09.748904   10884 start.go:228] waiting for startup goroutines ...
	I0108 21:48:09.749101   10884 start.go:242] writing updated cluster config ...
	I0108 21:48:09.764731   10884 config.go:182] Loaded profile config "multinode-554300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:48:09.764922   10884 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\config.json ...
	I0108 21:48:09.769049   10884 out.go:177] * Starting worker node multinode-554300-m03 in cluster multinode-554300
	I0108 21:48:09.769788   10884 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 21:48:09.769788   10884 cache.go:56] Caching tarball of preloaded images
	I0108 21:48:09.770218   10884 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:48:09.770499   10884 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0108 21:48:09.770576   10884 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\config.json ...
	I0108 21:48:09.778689   10884 start.go:365] acquiring machines lock for multinode-554300-m03: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:48:09.778824   10884 start.go:369] acquired machines lock for "multinode-554300-m03" in 76.1µs
	I0108 21:48:09.778824   10884 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:48:09.778824   10884 fix.go:54] fixHost starting: m03
	I0108 21:48:09.779450   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m03 ).state
	I0108 21:48:11.911032   10884 main.go:141] libmachine: [stdout =====>] : Off
	
	I0108 21:48:11.911032   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:48:11.911102   10884 fix.go:102] recreateIfNeeded on multinode-554300-m03: state=Stopped err=<nil>
	W0108 21:48:11.911102   10884 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 21:48:11.911904   10884 out.go:177] * Restarting existing hyperv VM for "multinode-554300-m03" ...
	I0108 21:48:11.912609   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-554300-m03
	I0108 21:48:14.367802   10884 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:48:14.367989   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:48:14.368046   10884 main.go:141] libmachine: Waiting for host to start...
	I0108 21:48:14.368046   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m03 ).state
	I0108 21:48:16.585911   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:48:16.585947   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:48:16.585997   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m03 ).networkadapters[0]).ipaddresses[0]
	I0108 21:48:19.109147   10884 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:48:19.109147   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:48:20.113724   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m03 ).state
	I0108 21:48:22.277791   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:48:22.277966   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:48:22.277966   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m03 ).networkadapters[0]).ipaddresses[0]
	I0108 21:48:24.856098   10884 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:48:24.856098   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:48:25.857610   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m03 ).state
	I0108 21:48:28.066229   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:48:28.066229   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:48:28.066314   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m03 ).networkadapters[0]).ipaddresses[0]
	I0108 21:48:30.642148   10884 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:48:30.642148   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:48:31.647305   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m03 ).state
	I0108 21:48:33.869493   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:48:33.869551   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:48:33.869551   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m03 ).networkadapters[0]).ipaddresses[0]
	I0108 21:48:36.365203   10884 main.go:141] libmachine: [stdout =====>] : 
	I0108 21:48:36.365203   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:48:37.367800   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m03 ).state
	I0108 21:48:39.601657   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:48:39.601729   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:48:39.601729   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m03 ).networkadapters[0]).ipaddresses[0]
	I0108 21:48:42.160781   10884 main.go:141] libmachine: [stdout =====>] : 172.29.108.2
	
	I0108 21:48:42.160991   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:48:42.164044   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m03 ).state
	I0108 21:48:44.265378   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:48:44.265378   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:48:44.265502   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m03 ).networkadapters[0]).ipaddresses[0]
	I0108 21:48:46.811750   10884 main.go:141] libmachine: [stdout =====>] : 172.29.108.2
	
	I0108 21:48:46.812082   10884 main.go:141] libmachine: [stderr =====>] : 
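
Editor's note: the block above is the driver's start-up poll. It keeps invoking PowerShell until Get-VM reports Running and the first network adapter exposes an IPv4 address (172.29.108.2 here). A minimal Go sketch of that polling pattern with os/exec, using the same PowerShell path and expressions visible in the log; this is an illustration, not the libmachine Hyper-V driver itself:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

const ps = `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`

// runPS executes one PowerShell expression and returns trimmed stdout.
func runPS(expr string) (string, error) {
	out, err := exec.Command(ps, "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

// waitForIP polls until the VM is Running and its first adapter has an address.
func waitForIP(vm string) (string, error) {
	for i := 0; i < 60; i++ {
		state, err := runPS(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
		if err != nil {
			return "", err
		}
		if state == "Running" {
			ip, err := runPS(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
			if err != nil {
				return "", err
			}
			if ip != "" {
				return ip, nil
			}
		}
		// Brief pause between polls; in the log each PowerShell round-trip
		// itself already takes a couple of seconds.
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
}

func main() {
	ip, err := waitForIP("multinode-554300-m03")
	if err != nil {
		panic(err)
	}
	fmt.Println("VM reachable at", ip)
}
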
	I0108 21:48:46.812176   10884 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300\config.json ...
	I0108 21:48:46.815234   10884 machine.go:88] provisioning docker machine ...
	I0108 21:48:46.815279   10884 buildroot.go:166] provisioning hostname "multinode-554300-m03"
	I0108 21:48:46.815279   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m03 ).state
	I0108 21:48:48.899147   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:48:48.899280   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:48:48.899280   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m03 ).networkadapters[0]).ipaddresses[0]
	I0108 21:48:51.410578   10884 main.go:141] libmachine: [stdout =====>] : 172.29.108.2
	
	I0108 21:48:51.410578   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:48:51.416937   10884 main.go:141] libmachine: Using SSH client type: native
	I0108 21:48:51.417634   10884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.108.2 22 <nil> <nil>}
	I0108 21:48:51.417634   10884 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-554300-m03 && echo "multinode-554300-m03" | sudo tee /etc/hostname
	I0108 21:48:51.577647   10884 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-554300-m03
	
	I0108 21:48:51.577647   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m03 ).state
	I0108 21:48:53.717625   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:48:53.717625   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:48:53.717625   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m03 ).networkadapters[0]).ipaddresses[0]
	I0108 21:48:56.250190   10884 main.go:141] libmachine: [stdout =====>] : 172.29.108.2
	
	I0108 21:48:56.250190   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:48:56.256551   10884 main.go:141] libmachine: Using SSH client type: native
	I0108 21:48:56.257716   10884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.108.2 22 <nil> <nil>}
	I0108 21:48:56.257814   10884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-554300-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-554300-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-554300-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:48:56.410974   10884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
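
Editor's note: both SSH payloads above are derived from the same node name; one sets the hostname, the other pins it to 127.0.1.1 in /etc/hosts so the name resolves locally. A small, hypothetical helper showing how the two snippets can be assembled (string composition only, not minikube's provisioner):

package main

import "fmt"

// hostnameCmds reproduces the two provisioning commands visible in the log for a
// given node name: set the hostname, then pin it to 127.0.1.1 in /etc/hosts.
func hostnameCmds(name string) (setHostname, fixHosts string) {
	setHostname = fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
	fixHosts = fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
	return setHostname, fixHosts
}

func main() {
	a, b := hostnameCmds("multinode-554300-m03")
	fmt.Println(a)
	fmt.Println(b)
}
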
	I0108 21:48:56.410974   10884 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0108 21:48:56.410974   10884 buildroot.go:174] setting up certificates
	I0108 21:48:56.410974   10884 provision.go:83] configureAuth start
	I0108 21:48:56.410974   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m03 ).state
	I0108 21:48:58.507860   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:48:58.507860   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:48:58.507974   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m03 ).networkadapters[0]).ipaddresses[0]
	I0108 21:49:01.053131   10884 main.go:141] libmachine: [stdout =====>] : 172.29.108.2
	
	I0108 21:49:01.053131   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:01.053251   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m03 ).state
	I0108 21:49:03.171545   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:49:03.171880   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:03.171880   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m03 ).networkadapters[0]).ipaddresses[0]
	I0108 21:49:05.685255   10884 main.go:141] libmachine: [stdout =====>] : 172.29.108.2
	
	I0108 21:49:05.685255   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:05.685255   10884 provision.go:138] copyHostCerts
	I0108 21:49:05.685255   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0108 21:49:05.685810   10884 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0108 21:49:05.685810   10884 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0108 21:49:05.686088   10884 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0108 21:49:05.687543   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0108 21:49:05.687543   10884 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0108 21:49:05.688074   10884 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0108 21:49:05.688147   10884 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0108 21:49:05.689292   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0108 21:49:05.689533   10884 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0108 21:49:05.689533   10884 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0108 21:49:05.690065   10884 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0108 21:49:05.690981   10884 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-554300-m03 san=[172.29.108.2 172.29.108.2 localhost 127.0.0.1 minikube multinode-554300-m03]
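
Editor's note: the server certificate above is signed by the shared minikube CA and carries the node's IP and hostnames as SANs. For orientation only, a generic crypto/x509 sketch that signs such a certificate with an existing CA. The SAN list and organization are copied from the log line; the file names, validity period, serial, and key handling (a PKCS#1 RSA CA key is assumed) are placeholder choices, not minikube's provision code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Placeholder paths standing in for the ca.pem / ca-key.pem referenced above.
	caCert, caKey := mustLoadCA("ca.pem", "ca-key.pem")

	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()), // arbitrary serial for the sketch
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-554300-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // arbitrary validity for the sketch
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("172.29.108.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "multinode-554300-m03"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}

func mustLoadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey) {
	certPEM, err := os.ReadFile(certPath)
	if err != nil {
		panic(err)
	}
	keyPEM, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	cb, _ := pem.Decode(certPEM)
	kb, _ := pem.Decode(keyPEM)
	cert, err := x509.ParseCertificate(cb.Bytes)
	if err != nil {
		panic(err)
	}
	key, err := x509.ParsePKCS1PrivateKey(kb.Bytes)
	if err != nil {
		panic(err)
	}
	return cert, key
}
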
	I0108 21:49:05.772182   10884 provision.go:172] copyRemoteCerts
	I0108 21:49:05.788097   10884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:49:05.788097   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m03 ).state
	I0108 21:49:07.895512   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:49:07.895512   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:07.895512   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m03 ).networkadapters[0]).ipaddresses[0]
	I0108 21:49:10.410299   10884 main.go:141] libmachine: [stdout =====>] : 172.29.108.2
	
	I0108 21:49:10.410299   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:10.410901   10884 sshutil.go:53] new ssh client: &{IP:172.29.108.2 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300-m03\id_rsa Username:docker}
	I0108 21:49:10.521127   10884 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7330063s)
	I0108 21:49:10.521127   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0108 21:49:10.521901   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0108 21:49:10.564525   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0108 21:49:10.564785   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:49:10.614953   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0108 21:49:10.615546   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:49:10.659227   10884 provision.go:86] duration metric: configureAuth took 14.2481831s
	I0108 21:49:10.659227   10884 buildroot.go:189] setting minikube options for container-runtime
	I0108 21:49:10.660900   10884 config.go:182] Loaded profile config "multinode-554300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:49:10.660900   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m03 ).state
	I0108 21:49:12.784130   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:49:12.784294   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:12.784294   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m03 ).networkadapters[0]).ipaddresses[0]
	I0108 21:49:15.316417   10884 main.go:141] libmachine: [stdout =====>] : 172.29.108.2
	
	I0108 21:49:15.316417   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:15.321019   10884 main.go:141] libmachine: Using SSH client type: native
	I0108 21:49:15.322050   10884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.108.2 22 <nil> <nil>}
	I0108 21:49:15.322050   10884 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 21:49:15.463155   10884 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0108 21:49:15.463217   10884 buildroot.go:70] root file system type: tmpfs
	I0108 21:49:15.463356   10884 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 21:49:15.463432   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m03 ).state
	I0108 21:49:17.592205   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:49:17.592205   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:17.592269   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m03 ).networkadapters[0]).ipaddresses[0]
	I0108 21:49:20.118599   10884 main.go:141] libmachine: [stdout =====>] : 172.29.108.2
	
	I0108 21:49:20.118599   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:20.124847   10884 main.go:141] libmachine: Using SSH client type: native
	I0108 21:49:20.125227   10884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.108.2 22 <nil> <nil>}
	I0108 21:49:20.125807   10884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.29.104.77"
	Environment="NO_PROXY=172.29.104.77,172.29.97.220"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 21:49:20.274124   10884 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.29.104.77
	Environment=NO_PROXY=172.29.104.77,172.29.97.220
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
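
Editor's note: the %!s(MISSING) in the command shown above (and in the later date +%!s(MISSING).%!N(MISSING) probe) appears to be a logging artifact rather than part of the command that actually ran. The remote command embeds a literal `printf %s "<unit file>"` (and `date +%s.%N`), and when that string is later passed through a printf-style log call with no matching argument, Go's fmt package renders the stray verb as %!s(MISSING). A two-line demonstration of that behaviour:

package main

import "fmt"

func main() {
	// Passing a string that contains its own %s verb through Printf with no
	// argument reproduces the artifact seen in the log (go vet flags this,
	// which is exactly the point of the demo).
	fmt.Printf("About to run: printf %s \"[Unit] ...\"\n")
	// Output: About to run: printf %!s(MISSING) "[Unit] ..."
}

The echoed unit file above shows the guest received the intended content, including the empty ExecStart= line whose purpose (clearing the inherited ExecStart) is explained in the unit's own comments.
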
	I0108 21:49:20.274124   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m03 ).state
	I0108 21:49:22.358098   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:49:22.358185   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:22.358185   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m03 ).networkadapters[0]).ipaddresses[0]
	I0108 21:49:24.950563   10884 main.go:141] libmachine: [stdout =====>] : 172.29.108.2
	
	I0108 21:49:24.950628   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:24.957757   10884 main.go:141] libmachine: Using SSH client type: native
	I0108 21:49:24.957757   10884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.108.2 22 <nil> <nil>}
	I0108 21:49:24.957757   10884 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 21:49:26.041354   10884 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0108 21:49:26.041488   10884 machine.go:91] provisioned docker machine in 39.2260162s
	I0108 21:49:26.041488   10884 start.go:300] post-start starting for "multinode-554300-m03" (driver="hyperv")
	I0108 21:49:26.041560   10884 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:49:26.056461   10884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:49:26.056461   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m03 ).state
	I0108 21:49:28.198465   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:49:28.198465   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:28.198589   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m03 ).networkadapters[0]).ipaddresses[0]
	I0108 21:49:30.730080   10884 main.go:141] libmachine: [stdout =====>] : 172.29.108.2
	
	I0108 21:49:30.730080   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:30.730860   10884 sshutil.go:53] new ssh client: &{IP:172.29.108.2 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300-m03\id_rsa Username:docker}
	I0108 21:49:30.840743   10884 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7842582s)
	I0108 21:49:30.854853   10884 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:49:30.859947   10884 command_runner.go:130] > NAME=Buildroot
	I0108 21:49:30.859947   10884 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0108 21:49:30.859947   10884 command_runner.go:130] > ID=buildroot
	I0108 21:49:30.859947   10884 command_runner.go:130] > VERSION_ID=2021.02.12
	I0108 21:49:30.859947   10884 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0108 21:49:30.859947   10884 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 21:49:30.859947   10884 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0108 21:49:30.860934   10884 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0108 21:49:30.860934   10884 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem -> 30082.pem in /etc/ssl/certs
	I0108 21:49:30.862072   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem -> /etc/ssl/certs/30082.pem
	I0108 21:49:30.874703   10884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:49:30.890186   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem --> /etc/ssl/certs/30082.pem (1708 bytes)
	I0108 21:49:30.930905   10884 start.go:303] post-start completed in 4.889351s
	I0108 21:49:30.930905   10884 fix.go:56] fixHost completed within 1m21.1516846s
	I0108 21:49:30.930973   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m03 ).state
	I0108 21:49:33.104151   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:49:33.104429   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:33.104517   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m03 ).networkadapters[0]).ipaddresses[0]
	I0108 21:49:35.613538   10884 main.go:141] libmachine: [stdout =====>] : 172.29.108.2
	
	I0108 21:49:35.613538   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:35.619363   10884 main.go:141] libmachine: Using SSH client type: native
	I0108 21:49:35.620098   10884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.108.2 22 <nil> <nil>}
	I0108 21:49:35.620098   10884 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 21:49:35.760250   10884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704750575.766101129
	
	I0108 21:49:35.760250   10884 fix.go:206] guest clock: 1704750575.766101129
	I0108 21:49:35.760250   10884 fix.go:219] Guest: 2024-01-08 21:49:35.766101129 +0000 UTC Remote: 2024-01-08 21:49:30.9309059 +0000 UTC m=+363.184882801 (delta=4.835195229s)
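
Editor's note: the skew fix above reads the guest clock with `date +%s.%N`, compares it with the "Remote" timestamp recorded on the host side, and then pushes the integer epoch back with `sudo date -s @...`. A quick Go check of that arithmetic, with both values copied from the line above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the Guest/Remote line above.
	guest := time.Unix(1704750575, 766101129)                      // guest clock probe
	host := time.Date(2024, 1, 8, 21, 49, 30, 930905900, time.UTC) // "Remote" timestamp
	fmt.Println("delta:", guest.Sub(host))                         // prints 4.835195229s, matching the log
	fmt.Printf("sync command: sudo date -s @%d\n", guest.Unix())   // matches the SSH command below
}
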
	I0108 21:49:35.760374   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m03 ).state
	I0108 21:49:37.924735   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:49:37.925136   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:37.926050   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m03 ).networkadapters[0]).ipaddresses[0]
	I0108 21:49:40.468569   10884 main.go:141] libmachine: [stdout =====>] : 172.29.108.2
	
	I0108 21:49:40.468569   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:40.476982   10884 main.go:141] libmachine: Using SSH client type: native
	I0108 21:49:40.477897   10884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.108.2 22 <nil> <nil>}
	I0108 21:49:40.477897   10884 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1704750575
	I0108 21:49:40.628887   10884 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan  8 21:49:35 UTC 2024
	
	I0108 21:49:40.628887   10884 fix.go:226] clock set: Mon Jan  8 21:49:35 UTC 2024
	 (err=<nil>)
	I0108 21:49:40.628887   10884 start.go:83] releasing machines lock for "multinode-554300-m03", held for 1m30.8496172s
	I0108 21:49:40.629691   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m03 ).state
	I0108 21:49:42.744158   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:49:42.744346   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:42.744411   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m03 ).networkadapters[0]).ipaddresses[0]
	I0108 21:49:45.286523   10884 main.go:141] libmachine: [stdout =====>] : 172.29.108.2
	
	I0108 21:49:45.286523   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:45.287871   10884 out.go:177] * Found network options:
	I0108 21:49:45.288885   10884 out.go:177]   - NO_PROXY=172.29.104.77,172.29.97.220
	W0108 21:49:45.289590   10884 proxy.go:119] fail to check proxy env: Error ip not in block
	W0108 21:49:45.289669   10884 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 21:49:45.290410   10884 out.go:177]   - NO_PROXY=172.29.104.77,172.29.97.220
	W0108 21:49:45.291212   10884 proxy.go:119] fail to check proxy env: Error ip not in block
	W0108 21:49:45.291297   10884 proxy.go:119] fail to check proxy env: Error ip not in block
	W0108 21:49:45.293006   10884 proxy.go:119] fail to check proxy env: Error ip not in block
	W0108 21:49:45.293006   10884 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 21:49:45.295289   10884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:49:45.295438   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m03 ).state
	I0108 21:49:45.309140   10884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 21:49:45.309140   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m03 ).state
	I0108 21:49:47.500012   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:49:47.500012   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:47.500012   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m03 ).networkadapters[0]).ipaddresses[0]
	I0108 21:49:47.503861   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:49:47.503861   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:47.503861   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m03 ).networkadapters[0]).ipaddresses[0]
	I0108 21:49:50.127650   10884 main.go:141] libmachine: [stdout =====>] : 172.29.108.2
	
	I0108 21:49:50.127650   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:50.128256   10884 sshutil.go:53] new ssh client: &{IP:172.29.108.2 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300-m03\id_rsa Username:docker}
	I0108 21:49:50.159754   10884 main.go:141] libmachine: [stdout =====>] : 172.29.108.2
	
	I0108 21:49:50.160332   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:50.160932   10884 sshutil.go:53] new ssh client: &{IP:172.29.108.2 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300-m03\id_rsa Username:docker}
	I0108 21:49:50.295146   10884 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 21:49:50.295146   10884 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0108 21:49:50.295333   10884 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9861231s)
	I0108 21:49:50.295333   10884 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9998317s)
	W0108 21:49:50.295333   10884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 21:49:50.309273   10884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:49:50.332061   10884 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0108 21:49:50.332921   10884 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
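
Editor's note: the find/-exec step above disables every bridge or podman CNI config it finds in /etc/cni/net.d by giving it a .mk_disabled suffix (here 87-podman-bridge.conflist), presumably so it does not interfere with the CNI minikube manages. An illustrative Go equivalent of that rename pass, not minikube's cni package:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs mimics the find/mv step above: any bridge or podman CNI
// config in dir that is not already disabled gets a ".mk_disabled" suffix.
func disableBridgeCNIs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	files, err := disableBridgeCNIs("/etc/cni/net.d")
	if err != nil {
		panic(err)
	}
	fmt.Println("disabled bridge cni config(s):", files)
}
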
	I0108 21:49:50.333032   10884 start.go:475] detecting cgroup driver to use...
	I0108 21:49:50.333266   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:49:50.362062   10884 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0108 21:49:50.378978   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0108 21:49:50.410762   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 21:49:50.426371   10884 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 21:49:50.439347   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 21:49:50.468587   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 21:49:50.500281   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 21:49:50.529110   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 21:49:50.560260   10884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:49:50.588924   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 21:49:50.620038   10884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:49:50.636053   10884 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0108 21:49:50.651073   10884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:49:50.680240   10884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:49:50.842847   10884 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:49:50.872385   10884 start.go:475] detecting cgroup driver to use...
	I0108 21:49:50.885436   10884 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 21:49:50.907025   10884 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0108 21:49:50.907025   10884 command_runner.go:130] > [Unit]
	I0108 21:49:50.907025   10884 command_runner.go:130] > Description=Docker Application Container Engine
	I0108 21:49:50.907025   10884 command_runner.go:130] > Documentation=https://docs.docker.com
	I0108 21:49:50.907025   10884 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0108 21:49:50.907025   10884 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0108 21:49:50.907025   10884 command_runner.go:130] > StartLimitBurst=3
	I0108 21:49:50.907025   10884 command_runner.go:130] > StartLimitIntervalSec=60
	I0108 21:49:50.907025   10884 command_runner.go:130] > [Service]
	I0108 21:49:50.907025   10884 command_runner.go:130] > Type=notify
	I0108 21:49:50.907025   10884 command_runner.go:130] > Restart=on-failure
	I0108 21:49:50.907025   10884 command_runner.go:130] > Environment=NO_PROXY=172.29.104.77
	I0108 21:49:50.907025   10884 command_runner.go:130] > Environment=NO_PROXY=172.29.104.77,172.29.97.220
	I0108 21:49:50.907025   10884 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0108 21:49:50.907025   10884 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0108 21:49:50.907025   10884 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0108 21:49:50.907025   10884 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0108 21:49:50.907025   10884 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0108 21:49:50.907025   10884 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0108 21:49:50.907025   10884 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0108 21:49:50.907025   10884 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0108 21:49:50.907025   10884 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0108 21:49:50.907025   10884 command_runner.go:130] > ExecStart=
	I0108 21:49:50.907025   10884 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0108 21:49:50.907025   10884 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0108 21:49:50.907025   10884 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0108 21:49:50.907025   10884 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0108 21:49:50.907025   10884 command_runner.go:130] > LimitNOFILE=infinity
	I0108 21:49:50.907025   10884 command_runner.go:130] > LimitNPROC=infinity
	I0108 21:49:50.907025   10884 command_runner.go:130] > LimitCORE=infinity
	I0108 21:49:50.907025   10884 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0108 21:49:50.907025   10884 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0108 21:49:50.907025   10884 command_runner.go:130] > TasksMax=infinity
	I0108 21:49:50.907025   10884 command_runner.go:130] > TimeoutStartSec=0
	I0108 21:49:50.907025   10884 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0108 21:49:50.907025   10884 command_runner.go:130] > Delegate=yes
	I0108 21:49:50.907025   10884 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0108 21:49:50.907025   10884 command_runner.go:130] > KillMode=process
	I0108 21:49:50.907025   10884 command_runner.go:130] > [Install]
	I0108 21:49:50.907025   10884 command_runner.go:130] > WantedBy=multi-user.target
	I0108 21:49:50.922898   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:49:50.955881   10884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:49:50.992890   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:49:51.031109   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:49:51.069858   10884 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:49:51.122459   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:49:51.142111   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:49:51.170920   10884 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0108 21:49:51.186412   10884 ssh_runner.go:195] Run: which cri-dockerd
	I0108 21:49:51.195098   10884 command_runner.go:130] > /usr/bin/cri-dockerd
	I0108 21:49:51.207125   10884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 21:49:51.222966   10884 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0108 21:49:51.264958   10884 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 21:49:51.432320   10884 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 21:49:51.590715   10884 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0108 21:49:51.590893   10884 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0108 21:49:51.632950   10884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:49:51.806358   10884 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 21:49:53.350549   10884 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5441834s)
	I0108 21:49:53.364353   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0108 21:49:53.407131   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0108 21:49:53.449860   10884 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0108 21:49:53.625520   10884 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 21:49:53.804600   10884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:49:53.984550   10884 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0108 21:49:54.024291   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0108 21:49:54.060901   10884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:49:54.230047   10884 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0108 21:49:54.338942   10884 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 21:49:54.354208   10884 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 21:49:54.363298   10884 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0108 21:49:54.363298   10884 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 21:49:54.363298   10884 command_runner.go:130] > Device: 16h/22d	Inode: 901         Links: 1
	I0108 21:49:54.363298   10884 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0108 21:49:54.363410   10884 command_runner.go:130] > Access: 2024-01-08 21:49:54.261304360 +0000
	I0108 21:49:54.363410   10884 command_runner.go:130] > Modify: 2024-01-08 21:49:54.261304360 +0000
	I0108 21:49:54.363410   10884 command_runner.go:130] > Change: 2024-01-08 21:49:54.264304360 +0000
	I0108 21:49:54.363475   10884 command_runner.go:130] >  Birth: -
	I0108 21:49:54.363475   10884 start.go:543] Will wait 60s for crictl version
	I0108 21:49:54.377363   10884 ssh_runner.go:195] Run: which crictl
	I0108 21:49:54.382566   10884 command_runner.go:130] > /usr/bin/crictl
	I0108 21:49:54.396616   10884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 21:49:54.472660   10884 command_runner.go:130] > Version:  0.1.0
	I0108 21:49:54.473319   10884 command_runner.go:130] > RuntimeName:  docker
	I0108 21:49:54.473319   10884 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0108 21:49:54.473319   10884 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 21:49:54.473319   10884 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0108 21:49:54.484209   10884 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 21:49:54.517175   10884 command_runner.go:130] > 24.0.7
	I0108 21:49:54.529647   10884 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 21:49:54.564642   10884 command_runner.go:130] > 24.0.7
	I0108 21:49:54.566751   10884 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0108 21:49:54.567736   10884 out.go:177]   - env NO_PROXY=172.29.104.77
	I0108 21:49:54.568646   10884 out.go:177]   - env NO_PROXY=172.29.104.77,172.29.97.220
	I0108 21:49:54.568646   10884 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0108 21:49:54.573662   10884 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0108 21:49:54.573662   10884 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0108 21:49:54.573662   10884 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0108 21:49:54.573662   10884 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:21:91:d6 Flags:up|broadcast|multicast|running}
	I0108 21:49:54.576657   10884 ip.go:210] interface addr: fe80::2be2:9d7a:5cc4:f25c/64
	I0108 21:49:54.576657   10884 ip.go:210] interface addr: 172.29.96.1/20
	I0108 21:49:54.591645   10884 ssh_runner.go:195] Run: grep 172.29.96.1	host.minikube.internal$ /etc/hosts
	I0108 21:49:54.596644   10884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:49:54.617437   10884 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-554300 for IP: 172.29.108.2
	I0108 21:49:54.617523   10884 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:49:54.618256   10884 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0108 21:49:54.618681   10884 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0108 21:49:54.618908   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 21:49:54.619079   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0108 21:49:54.619412   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 21:49:54.619412   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 21:49:54.619412   10884 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\3008.pem (1338 bytes)
	W0108 21:49:54.619412   10884 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\3008_empty.pem, impossibly tiny 0 bytes
	I0108 21:49:54.619412   10884 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0108 21:49:54.620796   10884 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0108 21:49:54.621177   10884 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0108 21:49:54.621476   10884 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0108 21:49:54.622172   10884 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem (1708 bytes)
	I0108 21:49:54.622495   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\3008.pem -> /usr/share/ca-certificates/3008.pem
	I0108 21:49:54.622663   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem -> /usr/share/ca-certificates/30082.pem
	I0108 21:49:54.622917   10884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:49:54.623728   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:49:54.664933   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 21:49:54.705219   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:49:54.744723   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 21:49:54.782544   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\3008.pem --> /usr/share/ca-certificates/3008.pem (1338 bytes)
	I0108 21:49:54.821723   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem --> /usr/share/ca-certificates/30082.pem (1708 bytes)
	I0108 21:49:54.858986   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:49:54.911027   10884 ssh_runner.go:195] Run: openssl version
	I0108 21:49:54.918007   10884 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0108 21:49:54.930003   10884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3008.pem && ln -fs /usr/share/ca-certificates/3008.pem /etc/ssl/certs/3008.pem"
	I0108 21:49:54.961538   10884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3008.pem
	I0108 21:49:54.967767   10884 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 20:28 /usr/share/ca-certificates/3008.pem
	I0108 21:49:54.967869   10884 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:28 /usr/share/ca-certificates/3008.pem
	I0108 21:49:54.980454   10884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3008.pem
	I0108 21:49:54.987584   10884 command_runner.go:130] > 51391683
	I0108 21:49:55.000221   10884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3008.pem /etc/ssl/certs/51391683.0"
	I0108 21:49:55.028773   10884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30082.pem && ln -fs /usr/share/ca-certificates/30082.pem /etc/ssl/certs/30082.pem"
	I0108 21:49:55.058466   10884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/30082.pem
	I0108 21:49:55.064998   10884 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 20:28 /usr/share/ca-certificates/30082.pem
	I0108 21:49:55.064998   10884 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:28 /usr/share/ca-certificates/30082.pem
	I0108 21:49:55.077808   10884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30082.pem
	I0108 21:49:55.084662   10884 command_runner.go:130] > 3ec20f2e
	I0108 21:49:55.097772   10884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/30082.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:49:55.129138   10884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:49:55.157698   10884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:49:55.163524   10884 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 20:14 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:49:55.163524   10884 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:14 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:49:55.175877   10884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:49:55.184874   10884 command_runner.go:130] > b5213941
	I0108 21:49:55.200947   10884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
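
Editor's note: the three symlink steps above follow OpenSSL's hashed-directory convention, where each CA in /etc/ssl/certs is exposed under <subject-hash>.0 so verification can find it by hash. A minimal sketch of the same step for one certificate, using the paths from the log (run inside the guest; the hash printed here is the b5213941 value seen above):

    # compute the subject hash OpenSSL uses for lookup
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # expose the CA under its hashed name, e.g. /etc/ssl/certs/b5213941.0
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
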
	I0108 21:49:55.233449   10884 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 21:49:55.239232   10884 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:49:55.239232   10884 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:49:55.248281   10884 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 21:49:55.281118   10884 command_runner.go:130] > cgroupfs
	I0108 21:49:55.282212   10884 cni.go:84] Creating CNI manager for ""
	I0108 21:49:55.282212   10884 cni.go:136] 3 nodes found, recommending kindnet
	I0108 21:49:55.282212   10884 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:49:55.282282   10884 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.29.108.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-554300 NodeName:multinode-554300-m03 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.29.104.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.29.108.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 21:49:55.282567   10884 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.29.108.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-554300-m03"
	  kubeletExtraArgs:
	    node-ip: 172.29.108.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.29.104.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:49:55.282730   10884 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-554300-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.108.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-554300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
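
Editor's note: the kubeadm InitConfiguration/ClusterConfiguration and kubelet flags rendered above are what the joining node later reads back from the cluster (the preflight output further below points at the same ConfigMap). A sketch of how to inspect that effective configuration, assuming kubectl is pointed at this cluster:

    # ClusterConfiguration as stored by kubeadm in the cluster
    kubectl -n kube-system get cm kubeadm-config -o yaml
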
	I0108 21:49:55.295523   10884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 21:49:55.312219   10884 command_runner.go:130] > kubeadm
	I0108 21:49:55.312277   10884 command_runner.go:130] > kubectl
	I0108 21:49:55.312277   10884 command_runner.go:130] > kubelet
	I0108 21:49:55.312397   10884 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:49:55.324769   10884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0108 21:49:55.339420   10884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0108 21:49:55.366095   10884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:49:55.406353   10884 ssh_runner.go:195] Run: grep 172.29.104.77	control-plane.minikube.internal$ /etc/hosts
	I0108 21:49:55.411861   10884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.104.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:49:55.428755   10884 host.go:66] Checking if "multinode-554300" exists ...
	I0108 21:49:55.429763   10884 config.go:182] Loaded profile config "multinode-554300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:49:55.429763   10884 start.go:304] JoinCluster: &{Name:multinode-554300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-554300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.29.104.77 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.97.220 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.108.2 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:49:55.429926   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0108 21:49:55.430001   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:49:57.569190   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:49:57.569467   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:49:57.569467   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:50:00.157997   10884 main.go:141] libmachine: [stdout =====>] : 172.29.104.77
	
	I0108 21:50:00.157997   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:50:00.158181   10884 sshutil.go:53] new ssh client: &{IP:172.29.104.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300\id_rsa Username:docker}
	I0108 21:50:00.356679   10884 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token hnqcdx.fjh1z8k1giu3zmnb --discovery-token-ca-cert-hash sha256:5d4a576aad216c3b59d844299451f9173aefcd7f6ecd29b777d9935fab24b02c 
	I0108 21:50:00.356679   10884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9266527s)
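
Editor's note: the --discovery-token-ca-cert-hash in that join command is the SHA-256 of the cluster CA's public key, so it can be recomputed independently from the CA certificate copied earlier. A sketch, following the standard kubeadm recipe and assuming an RSA CA key (as used here):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
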
	I0108 21:50:00.356679   10884 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:172.29.108.2 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0108 21:50:00.356679   10884 host.go:66] Checking if "multinode-554300" exists ...
	I0108 21:50:00.376145   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-554300-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0108 21:50:00.376145   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:50:02.481439   10884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:50:02.481439   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:50:02.481543   10884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:50:05.042088   10884 main.go:141] libmachine: [stdout =====>] : 172.29.104.77
	
	I0108 21:50:05.042088   10884 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:50:05.042933   10884 sshutil.go:53] new ssh client: &{IP:172.29.104.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300\id_rsa Username:docker}
	I0108 21:50:05.221157   10884 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0108 21:50:05.287129   10884 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-dnjjm, kube-system/kube-proxy-pdt95
	I0108 21:50:05.289729   10884 command_runner.go:130] > node/multinode-554300-m03 cordoned
	I0108 21:50:05.289729   10884 command_runner.go:130] > node/multinode-554300-m03 drained
	I0108 21:50:05.290033   10884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-554300-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (4.9138629s)
	I0108 21:50:05.290033   10884 node.go:108] successfully drained node "m03"
	I0108 21:50:05.290803   10884 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 21:50:05.291548   10884 kapi.go:59] client config for multinode-554300: &rest.Config{Host:"https://172.29.104.77:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x188c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:50:05.292338   10884 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0108 21:50:05.292861   10884 round_trippers.go:463] DELETE https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:05.292861   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:05.292861   10884 round_trippers.go:473]     Content-Type: application/json
	I0108 21:50:05.292861   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:05.293044   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:05.309704   10884 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0108 21:50:05.309704   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:05.309704   10884 round_trippers.go:580]     Content-Length: 171
	I0108 21:50:05.309704   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:05 GMT
	I0108 21:50:05.309704   10884 round_trippers.go:580]     Audit-Id: c1892fc7-ab7b-4e0f-ba4d-b1c9f18b9c85
	I0108 21:50:05.309704   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:05.310184   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:05.310184   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:05.310184   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:05.310285   10884 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-554300-m03","kind":"nodes","uid":"fc944979-99f9-46c6-a35f-f2c3e1c020f4"}}
	I0108 21:50:05.310333   10884 node.go:124] successfully deleted node "m03"
	I0108 21:50:05.310333   10884 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:172.29.108.2 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
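
Editor's note: the drain-and-delete sequence above is the usual manual cleanup of a stale worker before re-joining it; a roughly equivalent sketch using the node name from the log (minus the deprecated --delete-local-data flag):

    kubectl drain multinode-554300-m03 --ignore-daemonsets --delete-emptydir-data --force --grace-period=1
    kubectl delete node multinode-554300-m03
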
	I0108 21:50:05.310333   10884 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:172.29.108.2 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0108 21:50:05.310446   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hnqcdx.fjh1z8k1giu3zmnb --discovery-token-ca-cert-hash sha256:5d4a576aad216c3b59d844299451f9173aefcd7f6ecd29b777d9935fab24b02c --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-554300-m03"
	I0108 21:50:05.651689   10884 command_runner.go:130] ! W0108 21:50:05.658466    1361 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 21:50:06.346186   10884 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:50:08.146919   10884 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 21:50:08.147060   10884 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 21:50:08.147060   10884 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 21:50:08.147060   10884 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:50:08.147060   10884 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:50:08.147060   10884 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 21:50:08.147060   10884 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0108 21:50:08.147060   10884 command_runner.go:130] > This node has joined the cluster:
	I0108 21:50:08.147060   10884 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0108 21:50:08.147060   10884 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0108 21:50:08.147209   10884 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0108 21:50:08.147209   10884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hnqcdx.fjh1z8k1giu3zmnb --discovery-token-ca-cert-hash sha256:5d4a576aad216c3b59d844299451f9173aefcd7f6ecd29b777d9935fab24b02c --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-554300-m03": (2.8367486s)
	I0108 21:50:08.147209   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0108 21:50:08.341153   10884 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0108 21:50:08.521313   10884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28 minikube.k8s.io/name=multinode-554300 minikube.k8s.io/updated_at=2024_01_08T21_50_08_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:50:08.727898   10884 command_runner.go:130] > node/multinode-554300-m02 labeled
	I0108 21:50:08.727898   10884 command_runner.go:130] > node/multinode-554300-m03 labeled
	I0108 21:50:08.727898   10884 start.go:306] JoinCluster complete in 13.2980679s
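
Editor's note: with the join finished and the kubelet enabled, the re-added worker should appear in the node list. A quick check from the host, assuming the profile's kubeconfig context (minikube names the context after the profile):

    kubectl --context multinode-554300 get nodes -o wide
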
	I0108 21:50:08.727898   10884 cni.go:84] Creating CNI manager for ""
	I0108 21:50:08.727898   10884 cni.go:136] 3 nodes found, recommending kindnet
	I0108 21:50:08.740897   10884 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:50:08.748899   10884 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 21:50:08.748899   10884 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0108 21:50:08.748899   10884 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0108 21:50:08.748899   10884 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 21:50:08.748899   10884 command_runner.go:130] > Access: 2024-01-08 21:44:03.520554000 +0000
	I0108 21:50:08.748899   10884 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0108 21:50:08.748899   10884 command_runner.go:130] > Change: 2024-01-08 21:43:53.914000000 +0000
	I0108 21:50:08.748899   10884 command_runner.go:130] >  Birth: -
	I0108 21:50:08.748899   10884 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 21:50:08.748899   10884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 21:50:08.792849   10884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:50:09.209172   10884 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 21:50:09.209172   10884 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 21:50:09.209172   10884 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 21:50:09.209172   10884 command_runner.go:130] > daemonset.apps/kindnet configured
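
Editor's note: re-applying the kindnet manifest should schedule a CNI pod onto the re-joined node. A sketch of how to confirm; the app=kindnet label is an assumption based on the upstream kindnet manifest:

    kubectl -n kube-system get ds kindnet
    kubectl -n kube-system get pods -l app=kindnet -o wide
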
	I0108 21:50:09.211007   10884 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 21:50:09.211007   10884 kapi.go:59] client config for multinode-554300: &rest.Config{Host:"https://172.29.104.77:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x188c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:50:09.213117   10884 round_trippers.go:463] GET https://172.29.104.77:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 21:50:09.213160   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:09.213160   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:09.213160   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:09.218291   10884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 21:50:09.218291   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:09.218291   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:09.218969   10884 round_trippers.go:580]     Content-Length: 292
	I0108 21:50:09.218969   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:09 GMT
	I0108 21:50:09.218969   10884 round_trippers.go:580]     Audit-Id: dfd76a8e-201f-4e93-85a3-bd791986815d
	I0108 21:50:09.218969   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:09.218969   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:09.218969   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:09.218969   10884 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a05171e3-49ee-4610-ae26-e93c6a171dfe","resourceVersion":"1846","creationTimestamp":"2024-01-08T21:23:32Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 21:50:09.219124   10884 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-554300" context rescaled to 1 replicas
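
Editor's note: the Scale-subresource call above is equivalent to the usual kubectl invocation; a sketch:

    kubectl -n kube-system scale deployment coredns --replicas=1
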
	I0108 21:50:09.219171   10884 start.go:223] Will wait 6m0s for node &{Name:m03 IP:172.29.108.2 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0108 21:50:09.219958   10884 out.go:177] * Verifying Kubernetes components...
	I0108 21:50:09.235296   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:50:09.256344   10884 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 21:50:09.257263   10884 kapi.go:59] client config for multinode-554300: &rest.Config{Host:"https://172.29.104.77:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-554300\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x188c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:50:09.258685   10884 node_ready.go:35] waiting up to 6m0s for node "multinode-554300-m03" to be "Ready" ...
	I0108 21:50:09.258685   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:09.258685   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:09.258685   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:09.258685   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:09.263602   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:50:09.263602   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:09.263602   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:09.263602   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:09.263602   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:09.263602   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:09 GMT
	I0108 21:50:09.263602   10884 round_trippers.go:580]     Audit-Id: 755f191b-1372-4870-b119-1c76d39b44bb
	I0108 21:50:09.263760   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:09.263839   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2179","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3814 chars]
	I0108 21:50:09.769965   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:09.769965   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:09.769965   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:09.770073   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:09.773225   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:09.774115   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:09.774115   10884 round_trippers.go:580]     Audit-Id: 3b6eb9fb-640f-4a8b-8b9d-daf645f42c56
	I0108 21:50:09.774115   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:09.774115   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:09.774115   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:09.774294   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:09.774330   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:09 GMT
	I0108 21:50:09.774517   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2179","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3814 chars]
	I0108 21:50:10.273684   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:10.273684   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:10.273684   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:10.273684   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:10.277659   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:10.278202   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:10.278202   10884 round_trippers.go:580]     Audit-Id: 1931c39d-7506-4826-b5f8-341f96e9067f
	I0108 21:50:10.278202   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:10.278202   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:10.278202   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:10.278202   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:10.278202   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:10 GMT
	I0108 21:50:10.278368   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2179","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3814 chars]
	I0108 21:50:10.773453   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:10.773589   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:10.773589   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:10.773683   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:10.777047   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:10.778037   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:10.778078   10884 round_trippers.go:580]     Audit-Id: c44210db-254e-44f2-885e-13c693004d2b
	I0108 21:50:10.778078   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:10.778078   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:10.778078   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:10.778078   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:10.778078   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:10 GMT
	I0108 21:50:10.778321   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2179","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3814 chars]
	I0108 21:50:11.273447   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:11.273447   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:11.273447   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:11.273447   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:11.277509   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:50:11.277509   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:11.277509   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:11.277509   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:11.277509   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:11.277509   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:11 GMT
	I0108 21:50:11.277509   10884 round_trippers.go:580]     Audit-Id: efaea06a-c592-483b-a23d-c929c8320fa6
	I0108 21:50:11.278420   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:11.278527   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2190","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3923 chars]
	I0108 21:50:11.279124   10884 node_ready.go:58] node "multinode-554300-m03" has status "Ready":"False"
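
Editor's note: the GET loop above polls the node object roughly every 500ms, for up to 6m, waiting for the Ready condition to turn True. The same wait can be expressed from the host with kubectl; a sketch:

    kubectl --context multinode-554300 wait --for=condition=Ready node/multinode-554300-m03 --timeout=6m
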
	I0108 21:50:11.759548   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:11.759548   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:11.759657   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:11.759657   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:11.762788   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:50:11.762788   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:11.762870   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:11.762870   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:11.762870   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:11.762870   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:11 GMT
	I0108 21:50:11.762870   10884 round_trippers.go:580]     Audit-Id: bad943f9-f34f-432f-a5ae-0b58f7aec5c1
	I0108 21:50:11.762870   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:11.763135   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2190","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3923 chars]
	I0108 21:50:12.262854   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:12.262854   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:12.263004   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:12.263004   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:12.266326   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:12.266326   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:12.267281   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:12.267302   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:12.267302   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:12.267302   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:12 GMT
	I0108 21:50:12.267302   10884 round_trippers.go:580]     Audit-Id: 48af05f8-aa55-4307-a35c-6188001d3888
	I0108 21:50:12.267302   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:12.267610   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2190","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3923 chars]
	I0108 21:50:12.761993   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:12.761993   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:12.761993   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:12.761993   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:12.768137   10884 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 21:50:12.768137   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:12.768229   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:12.768229   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:12.768229   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:12.768229   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:12.768229   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:12 GMT
	I0108 21:50:12.768229   10884 round_trippers.go:580]     Audit-Id: c1eb304c-1d1f-47e4-b405-d6142d61692c
	I0108 21:50:12.768343   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2190","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3923 chars]
	I0108 21:50:13.262100   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:13.262194   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:13.262273   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:13.262273   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:13.267011   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:50:13.267244   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:13.267244   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:13.267244   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:13.267244   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:13 GMT
	I0108 21:50:13.267244   10884 round_trippers.go:580]     Audit-Id: cc0c5728-6478-47eb-9215-deb8356cc0af
	I0108 21:50:13.267244   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:13.267244   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:13.267521   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2190","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3923 chars]
	I0108 21:50:13.763286   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:13.763540   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:13.763540   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:13.763540   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:13.769896   10884 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 21:50:13.771002   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:13.771069   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:13 GMT
	I0108 21:50:13.771094   10884 round_trippers.go:580]     Audit-Id: 6fb44a50-9602-407c-a137-b60a33b7fa7e
	I0108 21:50:13.771094   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:13.771094   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:13.771139   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:13.771139   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:13.771330   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2190","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3923 chars]
	I0108 21:50:13.771519   10884 node_ready.go:58] node "multinode-554300-m03" has status "Ready":"False"
	I0108 21:50:14.266510   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:14.266510   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:14.266624   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:14.266624   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:14.270685   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:50:14.270776   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:14.270776   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:14.270776   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:14.270776   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:14 GMT
	I0108 21:50:14.270776   10884 round_trippers.go:580]     Audit-Id: b0f190d9-4a09-4cf1-b2ae-fbe14fcbb375
	I0108 21:50:14.270776   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:14.270895   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:14.271285   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2190","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3923 chars]
	I0108 21:50:14.770612   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:14.770677   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:14.770724   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:14.770724   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:14.775175   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:50:14.775175   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:14.775175   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:14.775175   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:14 GMT
	I0108 21:50:14.775175   10884 round_trippers.go:580]     Audit-Id: 98951ed7-1af2-42b8-820e-b38d4f145ae5
	I0108 21:50:14.775175   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:14.775175   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:14.775175   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:14.775175   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2190","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3923 chars]
	I0108 21:50:15.272902   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:15.272902   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:15.272902   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:15.272902   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:15.277859   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:50:15.277859   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:15.278015   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:15.278015   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:15.278015   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:15.278015   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:15.278015   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:15 GMT
	I0108 21:50:15.278015   10884 round_trippers.go:580]     Audit-Id: d2b860cf-3857-46f8-85fc-3144c495b0e2
	I0108 21:50:15.278237   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2190","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3923 chars]
	I0108 21:50:15.759615   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:15.759615   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:15.759685   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:15.759685   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:15.762108   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:50:15.762627   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:15.762627   10884 round_trippers.go:580]     Audit-Id: 0033bb0d-d8c2-421d-88fa-282c4b35af04
	I0108 21:50:15.762627   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:15.762627   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:15.762627   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:15.762627   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:15.762627   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:15 GMT
	I0108 21:50:15.762878   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2190","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3923 chars]
	I0108 21:50:16.262013   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:16.262013   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:16.262119   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:16.262119   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:16.266500   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:50:16.266500   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:16.266500   10884 round_trippers.go:580]     Audit-Id: fd7802c5-55b6-4f3d-81f0-04fa0915fb3a
	I0108 21:50:16.266629   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:16.266629   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:16.266629   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:16.266629   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:16.266629   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:16 GMT
	I0108 21:50:16.266772   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2190","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3923 chars]
	I0108 21:50:16.267299   10884 node_ready.go:58] node "multinode-554300-m03" has status "Ready":"False"
	I0108 21:50:16.761690   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:16.761840   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:16.761840   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:16.761840   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:16.765302   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:16.766293   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:16.766293   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:16.766293   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:16.766293   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:16.766293   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:16.766361   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:16 GMT
	I0108 21:50:16.766361   10884 round_trippers.go:580]     Audit-Id: 73da55b9-8af1-4909-8364-239b0e1cc608
	I0108 21:50:16.766448   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2190","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3923 chars]
	I0108 21:50:17.261661   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:17.261763   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:17.261763   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:17.261763   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:17.266076   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:50:17.266779   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:17.266847   10884 round_trippers.go:580]     Audit-Id: f4e9d5f0-7065-4d89-af52-16ca783656d8
	I0108 21:50:17.266847   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:17.266847   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:17.266847   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:17.266847   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:17.266847   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:17 GMT
	I0108 21:50:17.267100   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2190","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3923 chars]
	I0108 21:50:17.762871   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:17.762913   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:17.762913   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:17.762913   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:17.766391   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:17.766391   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:17.766391   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:17.766391   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:17.766391   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:17.766391   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:17.766391   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:17 GMT
	I0108 21:50:17.766391   10884 round_trippers.go:580]     Audit-Id: f2004a37-50a2-498c-be45-a89c5abc9680
	I0108 21:50:17.766391   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:18.265699   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:18.265786   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:18.265786   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:18.265786   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:18.269272   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:18.270482   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:18.270482   10884 round_trippers.go:580]     Audit-Id: 59589284-20c8-4d28-992e-4caabb9f99c5
	I0108 21:50:18.270482   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:18.270482   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:18.270482   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:18.270482   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:18.270482   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:18 GMT
	I0108 21:50:18.270635   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:18.270635   10884 node_ready.go:58] node "multinode-554300-m03" has status "Ready":"False"
	I0108 21:50:18.767451   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:18.767537   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:18.767537   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:18.767537   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:18.770900   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:18.771273   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:18.771273   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:18.771273   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:18.771273   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:18.771273   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:18 GMT
	I0108 21:50:18.771412   10884 round_trippers.go:580]     Audit-Id: 5323a7b0-8a41-4c35-8933-5c9af341ceaa
	I0108 21:50:18.771503   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:18.771901   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:19.271999   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:19.271999   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:19.271999   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:19.271999   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:19.276761   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:50:19.276761   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:19.276761   10884 round_trippers.go:580]     Audit-Id: edc22bab-0757-451d-8439-35db63c472f2
	I0108 21:50:19.276761   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:19.276761   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:19.276761   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:19.276761   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:19.276960   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:19 GMT
	I0108 21:50:19.277187   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:19.771508   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:19.771592   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:19.771592   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:19.771592   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:19.775980   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:50:19.775980   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:19.775980   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:19.776186   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:19.776186   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:19.776186   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:19.776186   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:19 GMT
	I0108 21:50:19.776186   10884 round_trippers.go:580]     Audit-Id: a8c56ffa-d079-45ee-9293-bb45f016632c
	I0108 21:50:19.776333   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:20.260167   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:20.260255   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:20.260255   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:20.260255   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:20.263978   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:20.263978   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:20.265008   10884 round_trippers.go:580]     Audit-Id: 893adeb4-8619-4eb8-9968-b1c8e8186abd
	I0108 21:50:20.265008   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:20.265008   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:20.265008   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:20.265008   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:20.265008   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:20 GMT
	I0108 21:50:20.265384   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:20.763257   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:20.763344   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:20.763344   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:20.763344   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:20.767725   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:50:20.767725   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:20.767725   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:20.767725   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:20 GMT
	I0108 21:50:20.767725   10884 round_trippers.go:580]     Audit-Id: 95740080-ce61-460f-9c7b-dbf60e33eb84
	I0108 21:50:20.767725   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:20.767725   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:20.768177   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:20.768564   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:20.769080   10884 node_ready.go:58] node "multinode-554300-m03" has status "Ready":"False"
	I0108 21:50:21.260437   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:21.260504   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:21.260504   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:21.260504   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:21.263902   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:21.263902   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:21.264921   10884 round_trippers.go:580]     Audit-Id: a130bd18-dc93-431d-8c4b-44c83b69ea28
	I0108 21:50:21.264921   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:21.264921   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:21.264921   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:21.264921   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:21.264979   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:21 GMT
	I0108 21:50:21.265072   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:21.761633   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:21.761633   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:21.761633   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:21.761633   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:21.765685   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:21.765747   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:21.765747   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:21 GMT
	I0108 21:50:21.765747   10884 round_trippers.go:580]     Audit-Id: 92616021-3d8f-4cf7-a687-3deb642aee3a
	I0108 21:50:21.765747   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:21.765747   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:21.765747   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:21.765747   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:21.765747   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:22.263796   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:22.263903   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:22.263903   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:22.263903   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:22.267179   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:22.267179   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:22.267179   10884 round_trippers.go:580]     Audit-Id: 5cac1bf5-299c-4c5d-90ec-4c6f0a98bfc2
	I0108 21:50:22.267179   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:22.267179   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:22.267179   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:22.267179   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:22.267955   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:22 GMT
	I0108 21:50:22.268269   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:22.766052   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:22.766138   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:22.766138   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:22.766138   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:22.770534   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:50:22.770534   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:22.770534   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:22.770534   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:22.770534   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:22 GMT
	I0108 21:50:22.770534   10884 round_trippers.go:580]     Audit-Id: 30b2b94a-439b-44c1-bbe6-65fe6c27d725
	I0108 21:50:22.770534   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:22.770934   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:22.771112   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:22.771585   10884 node_ready.go:58] node "multinode-554300-m03" has status "Ready":"False"
	I0108 21:50:23.269292   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:23.269292   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:23.269292   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:23.269292   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:23.273878   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:50:23.274140   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:23.274140   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:23.274140   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:23.274140   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:23.274140   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:23 GMT
	I0108 21:50:23.274140   10884 round_trippers.go:580]     Audit-Id: b2f61e37-0105-4113-b977-00e7679cbcdd
	I0108 21:50:23.274140   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:23.274332   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:23.771586   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:23.771586   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:23.771586   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:23.771586   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:23.776226   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:50:23.776672   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:23.776672   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:23.776672   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:23.776672   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:23 GMT
	I0108 21:50:23.776672   10884 round_trippers.go:580]     Audit-Id: b94e398e-e3b7-4022-b999-435443f4666e
	I0108 21:50:23.776672   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:23.776672   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:23.776839   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:24.272618   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:24.272618   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:24.272618   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:24.272618   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:24.276495   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:24.276495   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:24.276495   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:24.276495   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:24.276495   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:24 GMT
	I0108 21:50:24.276495   10884 round_trippers.go:580]     Audit-Id: ee375b3b-80aa-4777-bece-e54d374edb39
	I0108 21:50:24.276495   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:24.276495   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:24.276742   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:24.759808   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:24.759808   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:24.759808   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:24.759808   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:24.764248   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:50:24.764248   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:24.764248   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:24.764248   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:24.764248   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:24 GMT
	I0108 21:50:24.764248   10884 round_trippers.go:580]     Audit-Id: 61ea58a2-8c6e-40f5-b0dd-d0ba7704cd79
	I0108 21:50:24.764248   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:24.764248   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:24.764248   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:25.262164   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:25.262164   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:25.262307   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:25.262307   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:25.266542   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:50:25.266868   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:25.266868   10884 round_trippers.go:580]     Audit-Id: 6281d635-57f4-4e1e-94dd-5687f178c091
	I0108 21:50:25.266868   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:25.266868   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:25.266868   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:25.266868   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:25.266868   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:25 GMT
	I0108 21:50:25.267238   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:25.267730   10884 node_ready.go:58] node "multinode-554300-m03" has status "Ready":"False"
	I0108 21:50:25.764732   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:25.764732   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:25.764884   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:25.764884   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:25.768242   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:25.768242   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:25.768242   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:25.768242   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:25.768242   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:25 GMT
	I0108 21:50:25.768242   10884 round_trippers.go:580]     Audit-Id: 283f29b9-9f32-414d-92e1-98c23ceaf66f
	I0108 21:50:25.769131   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:25.769131   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:25.769468   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:26.267093   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:26.267184   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:26.267184   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:26.267320   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:26.274829   10884 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0108 21:50:26.274829   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:26.274829   10884 round_trippers.go:580]     Audit-Id: d345f55f-0865-4934-bdec-aad284fe1155
	I0108 21:50:26.274829   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:26.274829   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:26.274829   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:26.274829   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:26.274829   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:26 GMT
	I0108 21:50:26.275371   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:26.768960   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:26.769061   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:26.769061   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:26.769061   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:26.773447   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:50:26.773447   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:26.773447   10884 round_trippers.go:580]     Audit-Id: 823d75eb-8603-4a13-9a8f-b2103df683dc
	I0108 21:50:26.773894   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:26.773894   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:26.773894   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:26.773894   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:26.773894   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:26 GMT
	I0108 21:50:26.774131   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:27.274334   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:27.274334   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:27.274334   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:27.274334   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:27.277721   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:27.278742   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:27.278742   10884 round_trippers.go:580]     Audit-Id: c0f5e3dd-e633-4e2b-b22c-32a8282ed84c
	I0108 21:50:27.278791   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:27.278791   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:27.278791   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:27.278791   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:27.278791   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:27 GMT
	I0108 21:50:27.278857   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:27.278857   10884 node_ready.go:58] node "multinode-554300-m03" has status "Ready":"False"
	I0108 21:50:27.759720   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:27.759720   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:27.759720   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:27.759720   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:27.763314   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:27.763314   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:27.763314   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:27.763314   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:27.763314   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:27.763314   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:27.763314   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:27 GMT
	I0108 21:50:27.763314   10884 round_trippers.go:580]     Audit-Id: 0ca44394-3461-4a1c-a9fa-169d2c0f7fba
	I0108 21:50:27.763314   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:28.260614   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:28.260614   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:28.260614   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:28.260614   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:28.266105   10884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 21:50:28.266190   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:28.266190   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:28.266190   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:28.266190   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:28.266190   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:28 GMT
	I0108 21:50:28.266190   10884 round_trippers.go:580]     Audit-Id: 0cafead4-08fc-4cd0-8b14-6826d2f10821
	I0108 21:50:28.266190   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:28.266445   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:28.762144   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:28.762144   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:28.762144   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:28.762144   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:28.766784   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:50:28.767271   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:28.767271   10884 round_trippers.go:580]     Audit-Id: bd84f254-c68a-430b-87fd-43f0b25a53f1
	I0108 21:50:28.767271   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:28.767271   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:28.767271   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:28.767271   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:28.767336   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:28 GMT
	I0108 21:50:28.767521   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:29.264615   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:29.264615   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:29.264615   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:29.264615   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:29.269526   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:50:29.269969   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:29.269969   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:29 GMT
	I0108 21:50:29.269969   10884 round_trippers.go:580]     Audit-Id: 7aa16bd7-8f12-451c-b604-65799fa1bfdb
	I0108 21:50:29.270021   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:29.270021   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:29.270021   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:29.270021   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:29.270103   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2197","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3889 chars]
	I0108 21:50:29.766669   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:29.766753   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:29.766753   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:29.766753   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:29.770703   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:29.771051   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:29.771051   10884 round_trippers.go:580]     Audit-Id: 5754f957-08a6-4ca4-826b-f2503a82757d
	I0108 21:50:29.771051   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:29.771051   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:29.771051   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:29.771051   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:29.771051   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:29 GMT
	I0108 21:50:29.771356   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2217","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3755 chars]
	I0108 21:50:29.771876   10884 node_ready.go:49] node "multinode-554300-m03" has status "Ready":"True"
	I0108 21:50:29.771876   10884 node_ready.go:38] duration metric: took 20.5130887s waiting for node "multinode-554300-m03" to be "Ready" ...
	I0108 21:50:29.771876   10884 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:50:29.771999   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods
	I0108 21:50:29.772120   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:29.772120   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:29.772120   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:29.777494   10884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 21:50:29.777494   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:29.777494   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:29.777602   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:29 GMT
	I0108 21:50:29.777602   10884 round_trippers.go:580]     Audit-Id: 072e5a22-5b1f-41b5-a5af-83bc1eba57e9
	I0108 21:50:29.777602   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:29.777602   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:29.777602   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:29.780532   10884 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2217"},"items":[{"metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"1842","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82919 chars]
	I0108 21:50:29.784134   10884 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-q7vd7" in "kube-system" namespace to be "Ready" ...
	I0108 21:50:29.784134   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q7vd7
	I0108 21:50:29.784134   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:29.784134   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:29.784134   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:29.788069   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:29.788069   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:29.788069   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:29 GMT
	I0108 21:50:29.788069   10884 round_trippers.go:580]     Audit-Id: 72a30c64-3816-48a4-ad8e-572f54142b7f
	I0108 21:50:29.788069   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:29.788069   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:29.788069   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:29.788619   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:29.788786   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q7vd7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fe215542-1a69-4152-9098-06937431fa74","resourceVersion":"1842","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4ef51136-88a5-445c-bfe9-e1a45010851b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ef51136-88a5-445c-bfe9-e1a45010851b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I0108 21:50:29.789198   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:50:29.789378   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:29.789378   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:29.789378   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:29.792659   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:29.792659   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:29.792659   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:29.792659   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:29.792659   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:29 GMT
	I0108 21:50:29.792659   10884 round_trippers.go:580]     Audit-Id: 4838b691-1a03-454e-971d-b487ea30e21c
	I0108 21:50:29.792659   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:29.792659   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:29.792659   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:50:29.792659   10884 pod_ready.go:92] pod "coredns-5dd5756b68-q7vd7" in "kube-system" namespace has status "Ready":"True"
	I0108 21:50:29.792659   10884 pod_ready.go:81] duration metric: took 8.5248ms waiting for pod "coredns-5dd5756b68-q7vd7" in "kube-system" namespace to be "Ready" ...
	I0108 21:50:29.792659   10884 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:50:29.792659   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-554300
	I0108 21:50:29.794096   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:29.794096   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:29.794096   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:29.797296   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:50:29.797318   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:29.797318   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:29.797318   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:29.797318   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:29.797318   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:29 GMT
	I0108 21:50:29.797318   10884 round_trippers.go:580]     Audit-Id: 8837df19-bfb8-424e-a41e-1e1d3e01e4b6
	I0108 21:50:29.797318   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:29.797590   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-554300","namespace":"kube-system","uid":"55fb89f1-0f93-4967-877e-c170530dd9ed","resourceVersion":"1804","creationTimestamp":"2024-01-08T21:45:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.29.104.77:2379","kubernetes.io/config.hash":"eeac3c939de11db202bb72fb9d694f8f","kubernetes.io/config.mirror":"eeac3c939de11db202bb72fb9d694f8f","kubernetes.io/config.seen":"2024-01-08T21:45:22.563167670Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:45:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5863 chars]
	I0108 21:50:29.797676   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:50:29.797676   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:29.797676   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:29.797676   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:29.800908   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:29.800908   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:29.800908   10884 round_trippers.go:580]     Audit-Id: 8b85c1e9-2bf7-4712-990a-b275adb38c49
	I0108 21:50:29.801174   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:29.801174   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:29.801174   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:29.801174   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:29.801174   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:29 GMT
	I0108 21:50:29.801571   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:50:29.801978   10884 pod_ready.go:92] pod "etcd-multinode-554300" in "kube-system" namespace has status "Ready":"True"
	I0108 21:50:29.802036   10884 pod_ready.go:81] duration metric: took 9.3769ms waiting for pod "etcd-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:50:29.802036   10884 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:50:29.802148   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-554300
	I0108 21:50:29.802223   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:29.802245   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:29.802245   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:29.805510   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:29.805510   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:29.805510   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:29 GMT
	I0108 21:50:29.805510   10884 round_trippers.go:580]     Audit-Id: a631e560-9ffd-4b31-a9be-28de6b29ce6a
	I0108 21:50:29.805510   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:29.805510   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:29.805510   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:29.805510   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:29.805510   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-554300","namespace":"kube-system","uid":"ad4821d4-6eff-483c-b12d-9123225ab172","resourceVersion":"1805","creationTimestamp":"2024-01-08T21:45:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.29.104.77:8443","kubernetes.io/config.hash":"2ecc3d0c6efe0268d7822d74e28f9f5b","kubernetes.io/config.mirror":"2ecc3d0c6efe0268d7822d74e28f9f5b","kubernetes.io/config.seen":"2024-01-08T21:45:22.563174170Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:45:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7400 chars]
	I0108 21:50:29.806754   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:50:29.806754   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:29.806754   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:29.806893   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:29.809334   10884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:50:29.810266   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:29.810266   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:29.810266   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:29.810266   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:29.810266   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:29.810342   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:29 GMT
	I0108 21:50:29.810365   10884 round_trippers.go:580]     Audit-Id: dea6acd8-9945-4dcf-9332-ea82696b8fe0
	I0108 21:50:29.810516   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:50:29.810883   10884 pod_ready.go:92] pod "kube-apiserver-multinode-554300" in "kube-system" namespace has status "Ready":"True"
	I0108 21:50:29.810883   10884 pod_ready.go:81] duration metric: took 8.7911ms waiting for pod "kube-apiserver-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:50:29.810883   10884 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:50:29.810883   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-554300
	I0108 21:50:29.810883   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:29.810883   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:29.810883   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:29.817107   10884 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 21:50:29.817107   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:29.817107   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:29.817107   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:29.817107   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:29.817107   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:29.817107   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:29 GMT
	I0108 21:50:29.817107   10884 round_trippers.go:580]     Audit-Id: d81e998e-9d08-4a35-817f-352f3bdfdeb6
	I0108 21:50:29.817754   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-554300","namespace":"kube-system","uid":"c5c47910-dee9-4e42-8623-dbc45d13564f","resourceVersion":"1813","creationTimestamp":"2024-01-08T21:23:32Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8c1bbf537f0866640621fd36da3286e0","kubernetes.io/config.mirror":"8c1bbf537f0866640621fd36da3286e0","kubernetes.io/config.seen":"2024-01-08T21:23:32.232191792Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I0108 21:50:29.817817   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:50:29.817817   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:29.817817   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:29.817817   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:29.822720   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:50:29.822720   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:29.822720   10884 round_trippers.go:580]     Audit-Id: 05e4b6d8-df77-46e4-be01-648156657d43
	I0108 21:50:29.822720   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:29.822720   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:29.822720   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:29.822720   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:29.822720   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:29 GMT
	I0108 21:50:29.822720   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:50:29.823713   10884 pod_ready.go:92] pod "kube-controller-manager-multinode-554300" in "kube-system" namespace has status "Ready":"True"
	I0108 21:50:29.823713   10884 pod_ready.go:81] duration metric: took 12.8298ms waiting for pod "kube-controller-manager-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:50:29.823713   10884 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jsq7c" in "kube-system" namespace to be "Ready" ...
	I0108 21:50:29.970210   10884 request.go:629] Waited for 146.4966ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jsq7c
	I0108 21:50:29.970410   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jsq7c
	I0108 21:50:29.970498   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:29.970498   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:29.970498   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:29.974321   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:29.974321   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:29.974321   10884 round_trippers.go:580]     Audit-Id: 39a83c25-792e-4825-8fe5-e8963c00ddbc
	I0108 21:50:29.974321   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:29.974878   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:29.974878   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:29.974878   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:29.974878   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:29 GMT
	I0108 21:50:29.975650   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jsq7c","generateName":"kube-proxy-","namespace":"kube-system","uid":"cbc6a2d2-bb66-4af4-8a7d-315bc293cac0","resourceVersion":"1807","creationTimestamp":"2024-01-08T21:23:44Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0597a1a7-46a6-4b69-bb4b-83cbc4a269de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0597a1a7-46a6-4b69-bb4b-83cbc4a269de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0108 21:50:30.171344   10884 request.go:629] Waited for 194.8866ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:50:30.171344   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:50:30.171344   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:30.171344   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:30.171344   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:30.175143   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:30.175427   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:30.175427   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:30.175427   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:30.175427   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:30.175427   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:30 GMT
	I0108 21:50:30.175427   10884 round_trippers.go:580]     Audit-Id: 9fd2767b-ba21-4ca7-bf4b-445712b46bb3
	I0108 21:50:30.175427   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:30.175427   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:50:30.176188   10884 pod_ready.go:92] pod "kube-proxy-jsq7c" in "kube-system" namespace has status "Ready":"True"
	I0108 21:50:30.176188   10884 pod_ready.go:81] duration metric: took 352.4732ms waiting for pod "kube-proxy-jsq7c" in "kube-system" namespace to be "Ready" ...
	I0108 21:50:30.176188   10884 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nbzjb" in "kube-system" namespace to be "Ready" ...
	I0108 21:50:30.372731   10884 request.go:629] Waited for 196.4016ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nbzjb
	I0108 21:50:30.372875   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nbzjb
	I0108 21:50:30.372875   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:30.372875   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:30.372875   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:30.376425   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:30.376648   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:30.376648   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:30.376648   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:30.376648   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:30.376815   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:30.376815   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:30 GMT
	I0108 21:50:30.376815   10884 round_trippers.go:580]     Audit-Id: 97969d15-f977-445f-b6fd-f99b163b3ba9
	I0108 21:50:30.377132   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nbzjb","generateName":"kube-proxy-","namespace":"kube-system","uid":"73b08d5a-2015-4712-92b4-2d12298e9fc3","resourceVersion":"2004","creationTimestamp":"2024-01-08T21:26:42Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0597a1a7-46a6-4b69-bb4b-83cbc4a269de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0597a1a7-46a6-4b69-bb4b-83cbc4a269de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0108 21:50:30.578588   10884 request.go:629] Waited for 200.646ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:50:30.578588   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m02
	I0108 21:50:30.578588   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:30.578588   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:30.578588   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:30.582587   10884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:50:30.582587   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:30.582587   10884 round_trippers.go:580]     Audit-Id: 5a76cd2c-fd15-461b-9281-6c4ca22fcd61
	I0108 21:50:30.582587   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:30.582742   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:30.582742   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:30.582742   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:30.582742   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:30 GMT
	I0108 21:50:30.582865   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m02","uid":"9132c7d7-a2f4-429d-b538-ad18254f1c39","resourceVersion":"2178","creationTimestamp":"2024-01-08T21:47:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:47:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3805 chars]
	I0108 21:50:30.583340   10884 pod_ready.go:92] pod "kube-proxy-nbzjb" in "kube-system" namespace has status "Ready":"True"
	I0108 21:50:30.583452   10884 pod_ready.go:81] duration metric: took 407.2627ms waiting for pod "kube-proxy-nbzjb" in "kube-system" namespace to be "Ready" ...
	I0108 21:50:30.583452   10884 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pdt95" in "kube-system" namespace to be "Ready" ...
	I0108 21:50:30.782031   10884 request.go:629] Waited for 198.4255ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pdt95
	I0108 21:50:30.782310   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pdt95
	I0108 21:50:30.782310   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:30.782424   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:30.782424   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:30.786879   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:50:30.786879   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:30.786879   10884 round_trippers.go:580]     Audit-Id: 60c45db3-fdad-4ac0-9838-f4ede5afd37d
	I0108 21:50:30.786879   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:30.786879   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:30.787375   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:30.787375   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:30.787375   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:30 GMT
	I0108 21:50:30.787960   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pdt95","generateName":"kube-proxy-","namespace":"kube-system","uid":"e4aa76bc-96be-46f8-bc0e-7f3a6caa9883","resourceVersion":"2186","creationTimestamp":"2024-01-08T21:31:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0597a1a7-46a6-4b69-bb4b-83cbc4a269de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:31:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0597a1a7-46a6-4b69-bb4b-83cbc4a269de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0108 21:50:30.970583   10884 request.go:629] Waited for 181.552ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:30.970896   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300-m03
	I0108 21:50:30.970896   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:30.970896   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:30.970896   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:30.975800   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:50:30.975800   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:30.975800   10884 round_trippers.go:580]     Audit-Id: 8494e988-7f35-4f8b-b8b9-236af4648680
	I0108 21:50:30.975800   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:30.975800   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:30.975800   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:30.975800   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:30.975800   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:30 GMT
	I0108 21:50:30.976676   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300-m03","uid":"61f21f6d-05bc-433f-a5f9-4cc622905150","resourceVersion":"2217","creationTimestamp":"2024-01-08T21:50:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_50_08_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:50:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3755 chars]
	I0108 21:50:30.977051   10884 pod_ready.go:92] pod "kube-proxy-pdt95" in "kube-system" namespace has status "Ready":"True"
	I0108 21:50:30.977051   10884 pod_ready.go:81] duration metric: took 393.5969ms waiting for pod "kube-proxy-pdt95" in "kube-system" namespace to be "Ready" ...
	I0108 21:50:30.977051   10884 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:50:31.172400   10884 request.go:629] Waited for 194.8568ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-554300
	I0108 21:50:31.172566   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-554300
	I0108 21:50:31.172566   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:31.172566   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:31.172566   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:31.177091   10884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:50:31.177408   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:31.177474   10884 round_trippers.go:580]     Audit-Id: cb14eede-07d7-4b87-a723-d265194faef5
	I0108 21:50:31.177474   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:31.177474   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:31.177474   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:31.177560   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:31.177560   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:31 GMT
	I0108 21:50:31.177593   10884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-554300","namespace":"kube-system","uid":"f5b78bba-6cd0-495b-b6d6-c9afd93b3534","resourceVersion":"1806","creationTimestamp":"2024-01-08T21:23:32Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"33f58bfb7438c4df2c3fb56db1f613ab","kubernetes.io/config.mirror":"33f58bfb7438c4df2c3fb56db1f613ab","kubernetes.io/config.seen":"2024-01-08T21:23:32.232192792Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:23:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I0108 21:50:31.376036   10884 request.go:629] Waited for 197.2595ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:50:31.376282   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes/multinode-554300
	I0108 21:50:31.376282   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:31.376282   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:31.376282   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:31.382617   10884 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 21:50:31.382617   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:31.382706   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:31.382706   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:31 GMT
	I0108 21:50:31.382732   10884 round_trippers.go:580]     Audit-Id: 58d27d9c-ba2b-4562-9c85-331237928d98
	I0108 21:50:31.382732   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:31.382732   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:31.382732   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:31.383800   10884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-08T21:23:28Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0108 21:50:31.383800   10884 pod_ready.go:92] pod "kube-scheduler-multinode-554300" in "kube-system" namespace has status "Ready":"True"
	I0108 21:50:31.383800   10884 pod_ready.go:81] duration metric: took 406.7468ms waiting for pod "kube-scheduler-multinode-554300" in "kube-system" namespace to be "Ready" ...
	I0108 21:50:31.383800   10884 pod_ready.go:38] duration metric: took 1.6117934s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:50:31.383800   10884 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:50:31.398451   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:50:31.421256   10884 system_svc.go:56] duration metric: took 37.1813ms WaitForService to wait for kubelet.
	I0108 21:50:31.421256   10884 kubeadm.go:581] duration metric: took 22.2019121s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 21:50:31.421256   10884 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:50:31.579069   10884 request.go:629] Waited for 157.3432ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.104.77:8443/api/v1/nodes
	I0108 21:50:31.579266   10884 round_trippers.go:463] GET https://172.29.104.77:8443/api/v1/nodes
	I0108 21:50:31.579266   10884 round_trippers.go:469] Request Headers:
	I0108 21:50:31.579266   10884 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:50:31.579407   10884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0108 21:50:31.585433   10884 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 21:50:31.585433   10884 round_trippers.go:577] Response Headers:
	I0108 21:50:31.585433   10884 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:50:31 GMT
	I0108 21:50:31.585433   10884 round_trippers.go:580]     Audit-Id: e9e70826-f229-4706-8016-3cde536e34c9
	I0108 21:50:31.585433   10884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:50:31.585433   10884 round_trippers.go:580]     Content-Type: application/json
	I0108 21:50:31.585433   10884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9ebd4a85-36a3-4fdc-bd57-aab45239cbb7
	I0108 21:50:31.585433   10884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 15bc4c65-0928-4725-8730-353e6d4075e7
	I0108 21:50:31.586121   10884 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2221"},"items":[{"metadata":{"name":"multinode-554300","uid":"00d4501c-311c-4564-812c-b620304a4e8e","resourceVersion":"1838","creationTimestamp":"2024-01-08T21:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-554300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-554300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_23_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14715 chars]
	I0108 21:50:31.586972   10884 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:50:31.586972   10884 node_conditions.go:123] node cpu capacity is 2
	I0108 21:50:31.586972   10884 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:50:31.586972   10884 node_conditions.go:123] node cpu capacity is 2
	I0108 21:50:31.586972   10884 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:50:31.586972   10884 node_conditions.go:123] node cpu capacity is 2
	I0108 21:50:31.586972   10884 node_conditions.go:105] duration metric: took 165.7152ms to run NodePressure ...
	I0108 21:50:31.586972   10884 start.go:228] waiting for startup goroutines ...
	I0108 21:50:31.586972   10884 start.go:242] writing updated cluster config ...
	I0108 21:50:31.602051   10884 ssh_runner.go:195] Run: rm -f paused
	I0108 21:50:31.750580   10884 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 21:50:31.751609   10884 out.go:177] * Done! kubectl is now configured to use "multinode-554300" cluster and "default" namespace by default
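For reference, the pod_ready.go lines above poll the API server until each system pod reports a Ready condition and then record the wait duration. A minimal client-go sketch of that polling pattern, using a placeholder kubeconfig path and the kube-scheduler pod named in the log, could look roughly like this (an illustration only, not minikube's actual implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True,
// which is what the pod_ready.go log lines above are checking for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; adjust for a real cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget the log shows
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-scheduler-multinode-554300", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // simple fixed poll interval; minikube's real loop differs
	}
	fmt.Println("timed out waiting for pod to be Ready")
}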
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-01-08 21:43:55 UTC, ends at Mon 2024-01-08 21:50:52 UTC. --
	Jan 08 21:45:44 multinode-554300 dockerd[1058]: time="2024-01-08T21:45:44.684848121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:45:44 multinode-554300 dockerd[1058]: time="2024-01-08T21:45:44.684867420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:45:44 multinode-554300 dockerd[1058]: time="2024-01-08T21:45:44.684877320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:45:45 multinode-554300 dockerd[1058]: time="2024-01-08T21:45:45.051389190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:45:45 multinode-554300 dockerd[1058]: time="2024-01-08T21:45:45.051439588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:45:45 multinode-554300 dockerd[1058]: time="2024-01-08T21:45:45.051463687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:45:45 multinode-554300 dockerd[1058]: time="2024-01-08T21:45:45.051532885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:45:45 multinode-554300 cri-dockerd[1260]: time="2024-01-08T21:45:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/44893cca16d0181229fcb486ef2105e79dbf9950abfef9243d18219de0f6df51/resolv.conf as [nameserver 172.29.96.1]"
	Jan 08 21:45:45 multinode-554300 dockerd[1058]: time="2024-01-08T21:45:45.398536039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:45:45 multinode-554300 dockerd[1058]: time="2024-01-08T21:45:45.398772430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:45:45 multinode-554300 dockerd[1058]: time="2024-01-08T21:45:45.398814728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:45:45 multinode-554300 dockerd[1058]: time="2024-01-08T21:45:45.398967122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:45:45 multinode-554300 cri-dockerd[1260]: time="2024-01-08T21:45:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e90d282abbb78602b5ce3d6a48b4cd9052587fe8e8c86905b5c34597325c7329/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jan 08 21:45:45 multinode-554300 dockerd[1058]: time="2024-01-08T21:45:45.920585460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:45:45 multinode-554300 dockerd[1058]: time="2024-01-08T21:45:45.922098201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:45:45 multinode-554300 dockerd[1058]: time="2024-01-08T21:45:45.922367990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:45:45 multinode-554300 dockerd[1058]: time="2024-01-08T21:45:45.922544483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:46:02 multinode-554300 dockerd[1052]: time="2024-01-08T21:46:02.717784736Z" level=info msg="ignoring event" container=e3cad3a1ddfdf89cd03d63803c1764554688854d27f25c9c42b1a884b048e8d2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 21:46:02 multinode-554300 dockerd[1058]: time="2024-01-08T21:46:02.717745137Z" level=info msg="shim disconnected" id=e3cad3a1ddfdf89cd03d63803c1764554688854d27f25c9c42b1a884b048e8d2 namespace=moby
	Jan 08 21:46:02 multinode-554300 dockerd[1058]: time="2024-01-08T21:46:02.718988921Z" level=warning msg="cleaning up after shim disconnected" id=e3cad3a1ddfdf89cd03d63803c1764554688854d27f25c9c42b1a884b048e8d2 namespace=moby
	Jan 08 21:46:02 multinode-554300 dockerd[1058]: time="2024-01-08T21:46:02.719006120Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 21:46:14 multinode-554300 dockerd[1058]: time="2024-01-08T21:46:14.730888009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:46:14 multinode-554300 dockerd[1058]: time="2024-01-08T21:46:14.731012409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:46:14 multinode-554300 dockerd[1058]: time="2024-01-08T21:46:14.731065608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:46:14 multinode-554300 dockerd[1058]: time="2024-01-08T21:46:14.731097908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e74957ad5097f       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       2                   2482ba2780ab2       storage-provisioner
	71eb011610725       8c811b4aec35f                                                                                         5 minutes ago       Running             busybox                   1                   e90d282abbb78       busybox-5bc68d56bd-hrhnw
	e7be60c16d729       ead0a4a53df89                                                                                         5 minutes ago       Running             coredns                   1                   44893cca16d01       coredns-5dd5756b68-q7vd7
	ee987e6cf2145       c7d1297425461                                                                                         5 minutes ago       Running             kindnet-cni               1                   57cd05447d990       kindnet-5r79t
	e3cad3a1ddfdf       6e38f40d628db                                                                                         5 minutes ago       Exited              storage-provisioner       1                   2482ba2780ab2       storage-provisioner
	42677dfb82ad4       83f6cc407eed8                                                                                         5 minutes ago       Running             kube-proxy                1                   eba0ee37dcc3d       kube-proxy-jsq7c
	6c4b1830db271       73deb9a3f7025                                                                                         5 minutes ago       Running             etcd                      0                   9380e29ab4e2d       etcd-multinode-554300
	d376f0b5b0fa2       e3db313c6dbc0                                                                                         5 minutes ago       Running             kube-scheduler            1                   94776bb8d7954       kube-scheduler-multinode-554300
	d3824c6d8537a       7fe0e6f37db33                                                                                         5 minutes ago       Running             kube-apiserver            0                   372cfd4ed65fa       kube-apiserver-multinode-554300
	508ee1d30b007       d058aa5ab969c                                                                                         5 minutes ago       Running             kube-controller-manager   1                   952e22c525b75       kube-controller-manager-multinode-554300
	bb85cd47a0309       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   e5200fb3682db       busybox-5bc68d56bd-hrhnw
	146f9c24d2a4b       ead0a4a53df89                                                                                         26 minutes ago      Exited              coredns                   0                   2079ab544b8d9       coredns-5dd5756b68-q7vd7
	359babcc50a69       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              26 minutes ago      Exited              kindnet-cni               0                   ceed09dba4fb2       kindnet-5r79t
	2c18647ee3312       83f6cc407eed8                                                                                         27 minutes ago      Exited              kube-proxy                0                   5e4892494426d       kube-proxy-jsq7c
	c193667d32e41       e3db313c6dbc0                                                                                         27 minutes ago      Exited              kube-scheduler            0                   4081e28ae5451       kube-scheduler-multinode-554300
	5a21be70e8c82       d058aa5ab969c                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   6c64a54424c9b       kube-controller-manager-multinode-554300
	
	
	==> coredns [146f9c24d2a4] <==
	[INFO] 10.244.1.2:37675 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000063209s
	[INFO] 10.244.1.2:36622 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00006491s
	[INFO] 10.244.1.2:46580 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000053708s
	[INFO] 10.244.1.2:42017 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000053208s
	[INFO] 10.244.1.2:38648 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000070711s
	[INFO] 10.244.1.2:38719 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058008s
	[INFO] 10.244.1.2:53435 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057109s
	[INFO] 10.244.0.3:36893 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119617s
	[INFO] 10.244.0.3:58794 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000490371s
	[INFO] 10.244.0.3:53722 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171725s
	[INFO] 10.244.0.3:46338 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000545679s
	[INFO] 10.244.1.2:58618 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152622s
	[INFO] 10.244.1.2:54835 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119318s
	[INFO] 10.244.1.2:36265 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170025s
	[INFO] 10.244.1.2:55902 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153422s
	[INFO] 10.244.0.3:52265 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169424s
	[INFO] 10.244.0.3:43278 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108816s
	[INFO] 10.244.0.3:35101 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000319945s
	[INFO] 10.244.0.3:36695 - 5 "PTR IN 1.96.29.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00014182s
	[INFO] 10.244.1.2:44665 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017322s
	[INFO] 10.244.1.2:52765 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153417s
	[INFO] 10.244.1.2:57262 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000063407s
	[INFO] 10.244.1.2:44027 - 5 "PTR IN 1.96.29.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000114314s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e7be60c16d72] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecb7ac485f9c2b1ea9804efa09f1e19321672736f367e944ec746de174838ff4ac13f0ea72d0f91eb72162a02d709deb909d06018a457ac2adfe17d34b3613d8
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36451 - 47989 "HINFO IN 6403175210082942304.7545966864807666738. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.086503691s
	
	
	==> describe nodes <==
	Name:               multinode-554300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-554300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=multinode-554300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T21_23_33_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:23:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-554300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:50:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:45:48 +0000   Mon, 08 Jan 2024 21:23:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:45:48 +0000   Mon, 08 Jan 2024 21:23:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:45:48 +0000   Mon, 08 Jan 2024 21:23:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:45:48 +0000   Mon, 08 Jan 2024 21:45:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.104.77
	  Hostname:    multinode-554300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 1ae9b81399b44328b4b74cc48011b7c9
	  System UUID:                b9399726-afc3-4741-8f8d-1fb422dcdbf7
	  Boot ID:                    3496f925-826b-46d6-babf-90b3999f1b8f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-hrhnw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-5dd5756b68-q7vd7                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-554300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m24s
	  kube-system                 kindnet-5r79t                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-554300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-controller-manager-multinode-554300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-jsq7c                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-554300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 27m                    kube-proxy       
	  Normal  Starting                 5m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node multinode-554300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node multinode-554300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node multinode-554300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node multinode-554300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node multinode-554300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node multinode-554300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           27m                    node-controller  Node multinode-554300 event: Registered Node multinode-554300 in Controller
	  Normal  NodeReady                26m                    kubelet          Node multinode-554300 status is now: NodeReady
	  Normal  Starting                 5m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m30s (x8 over 5m30s)  kubelet          Node multinode-554300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m30s (x8 over 5m30s)  kubelet          Node multinode-554300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m30s (x7 over 5m30s)  kubelet          Node multinode-554300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m12s                  node-controller  Node multinode-554300 event: Registered Node multinode-554300 in Controller
	
	
	Name:               multinode-554300-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-554300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=multinode-554300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_08T21_50_08_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:47:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-554300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:50:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:48:07 +0000   Mon, 08 Jan 2024 21:47:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:48:07 +0000   Mon, 08 Jan 2024 21:47:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:48:07 +0000   Mon, 08 Jan 2024 21:47:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:48:07 +0000   Mon, 08 Jan 2024 21:48:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.97.220
	  Hostname:    multinode-554300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 c0a909eeb8ca486e8126fdedb8e57c20
	  System UUID:                55f6d4cc-d2a8-8b44-8585-1032f5566229
	  Boot ID:                    a41548dc-67be-4968-a975-66b51ca35ff0
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-wx8lk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  kube-system                 kindnet-4q524               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-nbzjb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 2m52s                  kube-proxy  
	  Normal  Starting                 24m                    kube-proxy  
	  Normal  Starting                 24m                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  24m (x2 over 24m)      kubelet     Node multinode-554300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x2 over 24m)      kubelet     Node multinode-554300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x2 over 24m)      kubelet     Node multinode-554300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                23m                    kubelet     Node multinode-554300-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  2m55s (x2 over 2m55s)  kubelet     Node multinode-554300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m55s (x2 over 2m55s)  kubelet     Node multinode-554300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m55s (x2 over 2m55s)  kubelet     Node multinode-554300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m55s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 2m55s                  kubelet     Starting kubelet.
	  Normal  NodeReady                2m45s                  kubelet     Node multinode-554300-m02 status is now: NodeReady
	
	
	Name:               multinode-554300-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-554300-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=multinode-554300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_08T21_50_08_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:50:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-554300-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:50:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:50:29 +0000   Mon, 08 Jan 2024 21:50:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:50:29 +0000   Mon, 08 Jan 2024 21:50:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:50:29 +0000   Mon, 08 Jan 2024 21:50:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:50:29 +0000   Mon, 08 Jan 2024 21:50:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.108.2
	  Hostname:    multinode-554300-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 70cd5c54b11e4b09b2eadc037493f915
	  System UUID:                e4660c84-6d13-9446-8988-287405569442
	  Boot ID:                    f8f95253-e1d2-4ce9-bb2d-979a5630dbd5
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-dnjjm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-pdt95    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m28s                  kube-proxy       
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  Starting                 43s                    kube-proxy       
	  Normal  Starting                 19m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)      kubelet          Node multinode-554300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)      kubelet          Node multinode-554300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)      kubelet          Node multinode-554300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                19m                    kubelet          Node multinode-554300-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    9m31s (x2 over 9m31s)  kubelet          Node multinode-554300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m31s (x2 over 9m31s)  kubelet          Node multinode-554300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m31s (x2 over 9m31s)  kubelet          Node multinode-554300-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 9m31s                  kubelet          Starting kubelet.
	  Normal  NodeReady                9m23s                  kubelet          Node multinode-554300-m03 status is now: NodeReady
	  Normal  Starting                 46s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  46s (x2 over 46s)      kubelet          Node multinode-554300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    46s (x2 over 46s)      kubelet          Node multinode-554300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s (x2 over 46s)      kubelet          Node multinode-554300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  46s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           41s                    node-controller  Node multinode-554300-m03 event: Registered Node multinode-554300-m03 in Controller
	  Normal  NodeReady                23s                    kubelet          Node multinode-554300-m03 status is now: NodeReady
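For reference, the Requests/Limits percentages in the node tables above are the summed pod requests divided by the node's allocatable resources. A small sketch using k8s.io/apimachinery's resource package, plugging in multinode-554300's figures from the describe output, reproduces the 42% CPU and 10% memory values (illustrative only; kubectl computes these itself):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Figures taken from the "describe nodes" output above for multinode-554300.
	cpuRequests := resource.MustParse("850m")     // summed CPU requests
	cpuAlloc := resource.MustParse("2")           // allocatable cpu
	memRequests := resource.MustParse("220Mi")    // summed memory requests
	memAlloc := resource.MustParse("2165980Ki")   // allocatable memory

	fmt.Printf("cpu:    %d%%\n", cpuRequests.MilliValue()*100/cpuAlloc.MilliValue()) // prints 42%
	fmt.Printf("memory: %d%%\n", memRequests.Value()*100/memAlloc.Value())           // prints 10%
}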
	
	
	==> dmesg <==
	              If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000563] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.646124] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.640622] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.089595] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Jan 8 21:44] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +44.493632] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.142356] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[Jan 8 21:45] systemd-fstab-generator[978]: Ignoring "noauto" for root device
	[  +0.598054] systemd-fstab-generator[1019]: Ignoring "noauto" for root device
	[  +0.165739] systemd-fstab-generator[1030]: Ignoring "noauto" for root device
	[  +0.218241] systemd-fstab-generator[1043]: Ignoring "noauto" for root device
	[  +1.442196] kauditd_printk_skb: 28 callbacks suppressed
	[  +0.402441] systemd-fstab-generator[1215]: Ignoring "noauto" for root device
	[  +0.156283] systemd-fstab-generator[1226]: Ignoring "noauto" for root device
	[  +0.153023] systemd-fstab-generator[1237]: Ignoring "noauto" for root device
	[  +0.228376] systemd-fstab-generator[1252]: Ignoring "noauto" for root device
	[  +4.071659] systemd-fstab-generator[1474]: Ignoring "noauto" for root device
	[  +0.857896] kauditd_printk_skb: 29 callbacks suppressed
	[ +18.525176] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [6c4b1830db27] <==
	{"level":"info","ts":"2024-01-08T21:45:25.682139Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"54a2b764b48fb5bd","local-member-id":"f556f2245c8dbb59","added-peer-id":"f556f2245c8dbb59","added-peer-peer-urls":["https://172.29.107.59:2380"]}
	{"level":"info","ts":"2024-01-08T21:45:25.682643Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"54a2b764b48fb5bd","local-member-id":"f556f2245c8dbb59","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:45:25.685722Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:45:25.70086Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T21:45:25.701137Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T21:45:25.701456Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T21:45:25.702048Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-08T21:45:25.702609Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f556f2245c8dbb59","initial-advertise-peer-urls":["https://172.29.104.77:2380"],"listen-peer-urls":["https://172.29.104.77:2380"],"advertise-client-urls":["https://172.29.104.77:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.104.77:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-08T21:45:25.702883Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-08T21:45:25.703643Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.29.104.77:2380"}
	{"level":"info","ts":"2024-01-08T21:45:25.703841Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.29.104.77:2380"}
	{"level":"info","ts":"2024-01-08T21:45:26.795716Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f556f2245c8dbb59 is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-08T21:45:26.79611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f556f2245c8dbb59 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-08T21:45:26.796295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f556f2245c8dbb59 received MsgPreVoteResp from f556f2245c8dbb59 at term 2"}
	{"level":"info","ts":"2024-01-08T21:45:26.796398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f556f2245c8dbb59 became candidate at term 3"}
	{"level":"info","ts":"2024-01-08T21:45:26.796541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f556f2245c8dbb59 received MsgVoteResp from f556f2245c8dbb59 at term 3"}
	{"level":"info","ts":"2024-01-08T21:45:26.796734Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f556f2245c8dbb59 became leader at term 3"}
	{"level":"info","ts":"2024-01-08T21:45:26.796857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f556f2245c8dbb59 elected leader f556f2245c8dbb59 at term 3"}
	{"level":"info","ts":"2024-01-08T21:45:26.806848Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f556f2245c8dbb59","local-member-attributes":"{Name:multinode-554300 ClientURLs:[https://172.29.104.77:2379]}","request-path":"/0/members/f556f2245c8dbb59/attributes","cluster-id":"54a2b764b48fb5bd","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T21:45:26.807224Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:45:26.807463Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:45:26.808641Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.29.104.77:2379"}
	{"level":"info","ts":"2024-01-08T21:45:26.810974Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T21:45:26.811182Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T21:45:26.818867Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:50:52 up 7 min,  0 users,  load average: 0.61, 0.50, 0.26
	Linux multinode-554300 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [359babcc50a6] <==
	I0108 21:41:37.904566       1 main.go:250] Node multinode-554300-m03 has CIDR [10.244.3.0/24] 
	I0108 21:41:47.910806       1 main.go:223] Handling node with IPs: map[172.29.107.59:{}]
	I0108 21:41:47.910910       1 main.go:227] handling current node
	I0108 21:41:47.910926       1 main.go:223] Handling node with IPs: map[172.29.96.43:{}]
	I0108 21:41:47.910936       1 main.go:250] Node multinode-554300-m02 has CIDR [10.244.1.0/24] 
	I0108 21:41:47.911356       1 main.go:223] Handling node with IPs: map[172.29.100.57:{}]
	I0108 21:41:47.911438       1 main.go:250] Node multinode-554300-m03 has CIDR [10.244.3.0/24] 
	I0108 21:41:57.918714       1 main.go:223] Handling node with IPs: map[172.29.107.59:{}]
	I0108 21:41:57.918840       1 main.go:227] handling current node
	I0108 21:41:57.918856       1 main.go:223] Handling node with IPs: map[172.29.96.43:{}]
	I0108 21:41:57.918865       1 main.go:250] Node multinode-554300-m02 has CIDR [10.244.1.0/24] 
	I0108 21:41:57.919597       1 main.go:223] Handling node with IPs: map[172.29.100.57:{}]
	I0108 21:41:57.919721       1 main.go:250] Node multinode-554300-m03 has CIDR [10.244.3.0/24] 
	I0108 21:42:07.927288       1 main.go:223] Handling node with IPs: map[172.29.107.59:{}]
	I0108 21:42:07.927392       1 main.go:227] handling current node
	I0108 21:42:07.927407       1 main.go:223] Handling node with IPs: map[172.29.96.43:{}]
	I0108 21:42:07.927416       1 main.go:250] Node multinode-554300-m02 has CIDR [10.244.1.0/24] 
	I0108 21:42:07.927779       1 main.go:223] Handling node with IPs: map[172.29.100.57:{}]
	I0108 21:42:07.927951       1 main.go:250] Node multinode-554300-m03 has CIDR [10.244.3.0/24] 
	I0108 21:42:17.942289       1 main.go:223] Handling node with IPs: map[172.29.107.59:{}]
	I0108 21:42:17.942384       1 main.go:227] handling current node
	I0108 21:42:17.942594       1 main.go:223] Handling node with IPs: map[172.29.96.43:{}]
	I0108 21:42:17.942626       1 main.go:250] Node multinode-554300-m02 has CIDR [10.244.1.0/24] 
	I0108 21:42:17.942818       1 main.go:223] Handling node with IPs: map[172.29.100.57:{}]
	I0108 21:42:17.942963       1 main.go:250] Node multinode-554300-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ee987e6cf214] <==
	I0108 21:50:07.437888       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.29.108.2 Flags: [] Table: 0} 
	I0108 21:50:17.445311       1 main.go:223] Handling node with IPs: map[172.29.104.77:{}]
	I0108 21:50:17.445574       1 main.go:227] handling current node
	I0108 21:50:17.445618       1 main.go:223] Handling node with IPs: map[172.29.97.220:{}]
	I0108 21:50:17.445628       1 main.go:250] Node multinode-554300-m02 has CIDR [10.244.1.0/24] 
	I0108 21:50:17.445912       1 main.go:223] Handling node with IPs: map[172.29.108.2:{}]
	I0108 21:50:17.446011       1 main.go:250] Node multinode-554300-m03 has CIDR [10.244.2.0/24] 
	I0108 21:50:27.453454       1 main.go:223] Handling node with IPs: map[172.29.104.77:{}]
	I0108 21:50:27.453878       1 main.go:227] handling current node
	I0108 21:50:27.453975       1 main.go:223] Handling node with IPs: map[172.29.97.220:{}]
	I0108 21:50:27.454003       1 main.go:250] Node multinode-554300-m02 has CIDR [10.244.1.0/24] 
	I0108 21:50:27.454168       1 main.go:223] Handling node with IPs: map[172.29.108.2:{}]
	I0108 21:50:27.454201       1 main.go:250] Node multinode-554300-m03 has CIDR [10.244.2.0/24] 
	I0108 21:50:37.471373       1 main.go:223] Handling node with IPs: map[172.29.104.77:{}]
	I0108 21:50:37.471478       1 main.go:227] handling current node
	I0108 21:50:37.471494       1 main.go:223] Handling node with IPs: map[172.29.97.220:{}]
	I0108 21:50:37.471502       1 main.go:250] Node multinode-554300-m02 has CIDR [10.244.1.0/24] 
	I0108 21:50:37.471623       1 main.go:223] Handling node with IPs: map[172.29.108.2:{}]
	I0108 21:50:37.471794       1 main.go:250] Node multinode-554300-m03 has CIDR [10.244.2.0/24] 
	I0108 21:50:47.478139       1 main.go:223] Handling node with IPs: map[172.29.104.77:{}]
	I0108 21:50:47.478240       1 main.go:227] handling current node
	I0108 21:50:47.478255       1 main.go:223] Handling node with IPs: map[172.29.97.220:{}]
	I0108 21:50:47.478263       1 main.go:250] Node multinode-554300-m02 has CIDR [10.244.1.0/24] 
	I0108 21:50:47.478788       1 main.go:223] Handling node with IPs: map[172.29.108.2:{}]
	I0108 21:50:47.478888       1 main.go:250] Node multinode-554300-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [d3824c6d8537] <==
	E0108 21:46:48.676852       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:46:58.678152       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:47:08.679361       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:47:18.680154       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:47:28.680694       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:47:38.681508       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:47:48.682748       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:47:58.683822       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:48:08.684576       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:48:18.685266       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","catch-all","exempt","global-default","leader-election","node-high","system","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:48:28.686144       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:48:38.687149       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:48:48.687614       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:48:58.688811       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","catch-all","exempt","global-default","leader-election","node-high","system","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:49:08.690086       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:49:18.690828       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:49:28.692266       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:49:38.693310       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:49:48.694592       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:49:58.695142       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:50:08.695777       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:50:18.696738       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:50:28.697876       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:50:38.699008       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	E0108 21:50:48.699238       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [508ee1d30b00] <==
	I0108 21:47:53.588911       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="29.805275ms"
	I0108 21:47:53.600744       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="11.711568ms"
	I0108 21:47:53.601342       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="505.903µs"
	I0108 21:47:57.983886       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-554300-m02\" does not exist"
	I0108 21:47:57.984729       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-w2zbn" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-w2zbn"
	I0108 21:47:57.992439       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-554300-m02" podCIDRs=["10.244.1.0/24"]
	I0108 21:47:58.849167       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="71.201µs"
	I0108 21:48:07.706178       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-554300-m02"
	I0108 21:48:07.734245       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="62.4µs"
	I0108 21:48:09.942058       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="109.601µs"
	I0108 21:48:09.951493       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="158.801µs"
	I0108 21:48:09.973573       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="35.8µs"
	I0108 21:48:10.105821       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="153.801µs"
	I0108 21:48:10.110494       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="33.7µs"
	I0108 21:48:10.110977       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-w2zbn" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-w2zbn"
	I0108 21:48:11.185373       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-wx8lk" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-wx8lk"
	I0108 21:48:12.212838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="9.642645ms"
	I0108 21:48:12.213618       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="43.5µs"
	I0108 21:50:05.316428       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-554300-m02"
	I0108 21:50:06.208082       1 event.go:307] "Event occurred" object="multinode-554300-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-554300-m03 event: Removing Node multinode-554300-m03 from Controller"
	I0108 21:50:06.964773       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-554300-m03\" does not exist"
	I0108 21:50:06.964873       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-554300-m02"
	I0108 21:50:06.977204       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-554300-m03" podCIDRs=["10.244.2.0/24"]
	I0108 21:50:11.208888       1 event.go:307] "Event occurred" object="multinode-554300-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-554300-m03 event: Registered Node multinode-554300-m03 in Controller"
	I0108 21:50:29.458149       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-554300-m02"
	
	
	==> kube-controller-manager [5a21be70e8c8] <==
	I0108 21:27:26.204612       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="16.960131ms"
	I0108 21:27:26.229403       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="24.575902ms"
	I0108 21:27:26.230727       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="200.633µs"
	I0108 21:27:26.231565       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="50.308µs"
	I0108 21:27:29.232167       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.784491ms"
	I0108 21:27:29.232398       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="80.213µs"
	I0108 21:27:29.314449       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.884306ms"
	I0108 21:27:29.314556       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="74.011µs"
	I0108 21:31:11.760889       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-554300-m02"
	I0108 21:31:11.761173       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-554300-m03\" does not exist"
	I0108 21:31:11.784941       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-554300-m03" podCIDRs=["10.244.2.0/24"]
	I0108 21:31:11.791902       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-dnjjm"
	I0108 21:31:11.801584       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pdt95"
	I0108 21:31:14.102615       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-554300-m03"
	I0108 21:31:14.102983       1 event.go:307] "Event occurred" object="multinode-554300-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-554300-m03 event: Registered Node multinode-554300-m03 in Controller"
	I0108 21:31:33.059902       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-554300-m02"
	I0108 21:39:04.217157       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-554300-m02"
	I0108 21:39:04.218810       1 event.go:307] "Event occurred" object="multinode-554300-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-554300-m03 status is now: NodeNotReady"
	I0108 21:39:04.240017       1 event.go:307] "Event occurred" object="kube-system/kindnet-dnjjm" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0108 21:39:04.252662       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-pdt95" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0108 21:41:20.528950       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-554300-m02"
	I0108 21:41:21.795608       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-554300-m02"
	I0108 21:41:21.795722       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-554300-m03\" does not exist"
	I0108 21:41:21.816391       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-554300-m03" podCIDRs=["10.244.3.0/24"]
	I0108 21:41:29.696992       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-554300-m02"
	
	
	==> kube-proxy [2c18647ee331] <==
	I0108 21:23:46.067501       1 server_others.go:69] "Using iptables proxy"
	I0108 21:23:46.092785       1 node.go:141] Successfully retrieved node IP: 172.29.107.59
	I0108 21:23:46.212583       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 21:23:46.212714       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 21:23:46.216002       1 server_others.go:152] "Using iptables Proxier"
	I0108 21:23:46.216133       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 21:23:46.216370       1 server.go:846] "Version info" version="v1.28.4"
	I0108 21:23:46.216411       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:23:46.217701       1 config.go:188] "Starting service config controller"
	I0108 21:23:46.217830       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 21:23:46.217870       1 config.go:97] "Starting endpoint slice config controller"
	I0108 21:23:46.217882       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 21:23:46.218881       1 config.go:315] "Starting node config controller"
	I0108 21:23:46.218899       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 21:23:46.318424       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 21:23:46.318484       1 shared_informer.go:318] Caches are synced for service config
	I0108 21:23:46.319187       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [42677dfb82ad] <==
	I0108 21:45:32.248258       1 server_others.go:69] "Using iptables proxy"
	I0108 21:45:32.326269       1 node.go:141] Successfully retrieved node IP: 172.29.104.77
	I0108 21:45:32.451500       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 21:45:32.451528       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 21:45:32.457051       1 server_others.go:152] "Using iptables Proxier"
	I0108 21:45:32.458174       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 21:45:32.458842       1 server.go:846] "Version info" version="v1.28.4"
	I0108 21:45:32.458883       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:45:32.465630       1 config.go:188] "Starting service config controller"
	I0108 21:45:32.467624       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 21:45:32.468994       1 config.go:97] "Starting endpoint slice config controller"
	I0108 21:45:32.469004       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 21:45:32.473813       1 config.go:315] "Starting node config controller"
	I0108 21:45:32.473831       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 21:45:32.567956       1 shared_informer.go:318] Caches are synced for service config
	I0108 21:45:32.569201       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 21:45:32.574537       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [c193667d32e4] <==
	E0108 21:23:29.620325       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 21:23:29.667961       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:23:29.668069       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 21:23:29.684016       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 21:23:29.684046       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 21:23:29.821310       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 21:23:29.821697       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 21:23:29.831426       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:23:29.831522       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 21:23:29.908576       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 21:23:29.908612       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 21:23:29.937303       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:23:29.937511       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 21:23:29.957204       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:23:29.957244       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 21:23:30.011985       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:23:30.012015       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 21:23:30.039176       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:23:30.039207       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 21:23:30.060512       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:23:30.060909       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0108 21:23:32.549569       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 21:42:20.046407       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0108 21:42:20.046887       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0108 21:42:20.047242       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d376f0b5b0fa] <==
	I0108 21:45:25.914964       1 serving.go:348] Generated self-signed cert in-memory
	W0108 21:45:28.522728       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0108 21:45:28.522766       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 21:45:28.522945       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0108 21:45:28.522959       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0108 21:45:28.660728       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0108 21:45:28.660775       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:45:28.667765       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0108 21:45:28.667864       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0108 21:45:28.676054       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 21:45:28.672868       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0108 21:45:28.777873       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 21:43:55 UTC, ends at Mon 2024-01-08 21:50:53 UTC. --
	Jan 08 21:46:03 multinode-554300 kubelet[1480]: I0108 21:46:03.051457    1480 scope.go:117] "RemoveContainer" containerID="e3cad3a1ddfdf89cd03d63803c1764554688854d27f25c9c42b1a884b048e8d2"
	Jan 08 21:46:03 multinode-554300 kubelet[1480]: E0108 21:46:03.051993    1480 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2fb8721f-01cc-4078-b45c-964d73e3da98)\"" pod="kube-system/storage-provisioner" podUID="2fb8721f-01cc-4078-b45c-964d73e3da98"
	Jan 08 21:46:14 multinode-554300 kubelet[1480]: I0108 21:46:14.620056    1480 scope.go:117] "RemoveContainer" containerID="e3cad3a1ddfdf89cd03d63803c1764554688854d27f25c9c42b1a884b048e8d2"
	Jan 08 21:46:22 multinode-554300 kubelet[1480]: E0108 21:46:22.645340    1480 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:46:22 multinode-554300 kubelet[1480]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:46:22 multinode-554300 kubelet[1480]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:46:22 multinode-554300 kubelet[1480]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:46:22 multinode-554300 kubelet[1480]: I0108 21:46:22.675322    1480 scope.go:117] "RemoveContainer" containerID="eb93c2ad9198efe4f00dde51e8d9be4d532ac18013b6ed0d120d8f84b6abf8f5"
	Jan 08 21:46:22 multinode-554300 kubelet[1480]: I0108 21:46:22.709060    1480 scope.go:117] "RemoveContainer" containerID="3f926c6626bfc3bbab8bdfbb9715f064e0fc58c5a45ec3debb5700785086475c"
	Jan 08 21:47:22 multinode-554300 kubelet[1480]: E0108 21:47:22.645042    1480 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:47:22 multinode-554300 kubelet[1480]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:47:22 multinode-554300 kubelet[1480]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:47:22 multinode-554300 kubelet[1480]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:48:22 multinode-554300 kubelet[1480]: E0108 21:48:22.643124    1480 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:48:22 multinode-554300 kubelet[1480]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:48:22 multinode-554300 kubelet[1480]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:48:22 multinode-554300 kubelet[1480]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:49:22 multinode-554300 kubelet[1480]: E0108 21:49:22.643393    1480 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:49:22 multinode-554300 kubelet[1480]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:49:22 multinode-554300 kubelet[1480]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:49:22 multinode-554300 kubelet[1480]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:50:22 multinode-554300 kubelet[1480]: E0108 21:50:22.643870    1480 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:50:22 multinode-554300 kubelet[1480]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:50:22 multinode-554300 kubelet[1480]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:50:22 multinode-554300 kubelet[1480]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 21:50:44.375531    8084 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-554300 -n multinode-554300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-554300 -n multinode-554300: (12.0786857s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-554300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (541.39s)

                                                
                                    
TestRunningBinaryUpgrade (708.39s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.6.2.701252988.exe start -p running-upgrade-680100 --memory=2200 --vm-driver=hyperv
E0108 22:07:11.250919    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
version_upgrade_test.go:133: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.6.2.701252988.exe start -p running-upgrade-680100 --memory=2200 --vm-driver=hyperv: (4m16.7894976s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-680100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p running-upgrade-680100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (6m30.6255249s)

                                                
                                                
-- stdout --
	* [running-upgrade-680100] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the hyperv driver based on existing profile
	* Starting control plane node running-upgrade-680100 in cluster running-upgrade-680100
	* Updating the running hyperv "running-upgrade-680100" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 22:11:14.985761   10304 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0108 22:11:15.075557   10304 out.go:296] Setting OutFile to fd 1492 ...
	I0108 22:11:15.076231   10304 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:11:15.076231   10304 out.go:309] Setting ErrFile to fd 1376...
	I0108 22:11:15.076231   10304 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:11:15.101439   10304 out.go:303] Setting JSON to false
	I0108 22:11:15.106308   10304 start.go:128] hostinfo: {"hostname":"minikube7","uptime":29817,"bootTime":1704722057,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0108 22:11:15.106434   10304 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 22:11:15.107432   10304 out.go:177] * [running-upgrade-680100] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I0108 22:11:15.108430   10304 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 22:11:15.107432   10304 notify.go:220] Checking for updates...
	I0108 22:11:15.108430   10304 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:11:15.109466   10304 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0108 22:11:15.110522   10304 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 22:11:15.111448   10304 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:11:15.112454   10304 config.go:182] Loaded profile config "running-upgrade-680100": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0108 22:11:15.113444   10304 start_flags.go:694] config upgrade: Driver=hyperv
	I0108 22:11:15.113444   10304 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0108 22:11:15.113444   10304 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\running-upgrade-680100\config.json ...
	I0108 22:11:15.118441   10304 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0108 22:11:15.119449   10304 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:11:20.898080   10304 out.go:177] * Using the hyperv driver based on existing profile
	I0108 22:11:20.898785   10304 start.go:298] selected driver: hyperv
	I0108 22:11:20.898863   10304 start.go:902] validating driver "hyperv" against &{Name:running-upgrade-680100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0
ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.29.109.220 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 22:11:20.898863   10304 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:11:20.966111   10304 cni.go:84] Creating CNI manager for ""
	I0108 22:11:20.966111   10304 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0108 22:11:20.966111   10304 start_flags.go:323] config:
	{Name:running-upgrade-680100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.29.109.220 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 22:11:20.966656   10304 iso.go:125] acquiring lock: {Name:mk1869c5b5033c1301af33a7fa364ec43dd63efe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:11:20.967626   10304 out.go:177] * Starting control plane node running-upgrade-680100 in cluster running-upgrade-680100
	I0108 22:11:20.968280   10304 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	W0108 22:11:21.016252   10304 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0108 22:11:21.016469   10304 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\running-upgrade-680100\config.json ...
	I0108 22:11:21.016570   10304 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0108 22:11:21.016570   10304 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0
	I0108 22:11:21.016570   10304 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0108 22:11:21.016570   10304 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0
	I0108 22:11:21.016570   10304 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0
	I0108 22:11:21.016570   10304 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0
	I0108 22:11:21.016570   10304 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I0108 22:11:21.016570   10304 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5
	I0108 22:11:21.020226   10304 start.go:365] acquiring machines lock for running-upgrade-680100: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 22:11:21.210131   10304 cache.go:107] acquiring lock: {Name:mkc6e9060bea9211e4f8126ac5de344442cb8c23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:11:21.210843   10304 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I0108 22:11:21.212131   10304 cache.go:107] acquiring lock: {Name:mk945c9573a262bf2c410f3ec338c9e4cbac7ce3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:11:21.212689   10304 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0108 22:11:21.212895   10304 cache.go:107] acquiring lock: {Name:mkeac0ccf1d6f0e0eb0c19801602a218964c6025 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:11:21.213006   10304 cache.go:107] acquiring lock: {Name:mk1869bccfa4db5e538bd31af28e9c95a48df16c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:11:21.213349   10304 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0108 22:11:21.213460   10304 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I0108 22:11:21.220253   10304 cache.go:107] acquiring lock: {Name:mk6522f86f404131d1768d0de0ce775513ec42e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:11:21.220253   10304 cache.go:107] acquiring lock: {Name:mk3a663ba67028a054dd5a6e96ba367c56e950d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:11:21.220253   10304 cache.go:107] acquiring lock: {Name:mk43c24b3570a50e54ec9f1dc43aba5ea2e54859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:11:21.220253   10304 cache.go:107] acquiring lock: {Name:mke680978131adbec647605a81bab7c783de93d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:11:21.220990   10304 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0108 22:11:21.221094   10304 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I0108 22:11:21.221148   10304 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 204.577ms
	I0108 22:11:21.221148   10304 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0108 22:11:21.221148   10304 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I0108 22:11:21.221148   10304 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0108 22:11:21.228497   10304 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0108 22:11:21.228497   10304 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0108 22:11:21.229503   10304 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I0108 22:11:21.231485   10304 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I0108 22:11:21.234514   10304 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I0108 22:11:21.237504   10304 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I0108 22:11:21.242495   10304 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	W0108 22:11:21.336018   10304 image.go:187] authn lookup for registry.k8s.io/etcd:3.4.3-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0108 22:11:21.444744   10304 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0108 22:11:21.555140   10304 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0108 22:11:21.662968   10304 image.go:187] authn lookup for registry.k8s.io/coredns:1.6.5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0108 22:11:21.768200   10304 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0108 22:11:21.866020   10304 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0108 22:11:21.897449   10304 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I0108 22:11:21.905133   10304 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0
	I0108 22:11:21.957098   10304 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0
	I0108 22:11:21.963943   10304 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5
	W0108 22:11:21.965440   10304 image.go:187] authn lookup for registry.k8s.io/pause:3.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0108 22:11:22.297945   10304 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0108 22:11:22.297945   10304 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0
	I0108 22:11:22.336966   10304 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0
	I0108 22:11:22.574465   10304 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 exists
	I0108 22:11:22.575318   10304 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.1" took 1.5587404s
	I0108 22:11:22.575318   10304 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 succeeded
	I0108 22:11:23.306724   10304 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 exists
	I0108 22:11:23.306724   10304 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.17.0" took 2.2901423s
	I0108 22:11:23.306724   10304 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 succeeded
	I0108 22:11:23.559469   10304 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 exists
	I0108 22:11:23.560046   10304 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.17.0" took 2.5434627s
	I0108 22:11:23.560046   10304 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 succeeded
	I0108 22:11:23.924965   10304 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 exists
	I0108 22:11:23.924965   10304 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.4.3-0" took 2.9076739s
	I0108 22:11:23.924965   10304 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 succeeded
	I0108 22:11:24.510367   10304 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 exists
	I0108 22:11:24.510367   10304 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.17.0" took 3.4932352s
	I0108 22:11:24.510367   10304 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 succeeded
	I0108 22:11:24.767588   10304 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 exists
	I0108 22:11:24.768253   10304 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns_1.6.5" took 3.750957s
	I0108 22:11:24.768328   10304 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 succeeded
	I0108 22:11:32.768947   10304 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 exists
	I0108 22:11:32.769700   10304 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.17.0" took 11.7530451s
	I0108 22:11:32.769700   10304 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 succeeded
	I0108 22:11:32.769799   10304 cache.go:87] Successfully saved all images to host disk.
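
The cache.go lines above show the exists-then-skip pattern minikube uses for image caching: each image maps to a tar file under .minikube\cache\images, and the download is skipped when that file is already on the host disk. A minimal sketch of that check, assuming only the Go standard library (the function names and path layout here are illustrative, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // cachePathFor maps an image ref like "registry.k8s.io/pause:3.1" to a tar
    // path such as <cacheDir>\registry.k8s.io\pause_3.1 (illustrative layout).
    func cachePathFor(cacheDir, image string) string {
    	return filepath.Join(cacheDir, filepath.FromSlash(strings.ReplaceAll(image, ":", "_")))
    }

    // ensureCached skips the pull entirely when the tar already exists on disk.
    func ensureCached(cacheDir, image string, download func(dst string) error) error {
    	dst := cachePathFor(cacheDir, image)
    	if _, err := os.Stat(dst); err == nil {
    		fmt.Printf("cache image %q -> %q exists, skipping\n", image, dst)
    		return nil
    	}
    	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
    		return err
    	}
    	return download(dst) // pull and save to tar only on a cache miss
    }

    func main() {
    	_ = ensureCached(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64`,
    		"registry.k8s.io/pause:3.1",
    		func(dst string) error { fmt.Println("would pull and save to", dst); return nil })
    }
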
	I0108 22:16:07.770669   10304 start.go:369] acquired machines lock for "running-upgrade-680100" in 4m46.7488828s
	I0108 22:16:07.771477   10304 start.go:96] Skipping create...Using existing machine configuration
	I0108 22:16:07.771617   10304 fix.go:54] fixHost starting: minikube
	I0108 22:16:07.772758   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-680100 ).state
	I0108 22:16:10.027728   10304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:16:10.027728   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:16:10.027728   10304 fix.go:102] recreateIfNeeded on running-upgrade-680100: state=Running err=<nil>
	W0108 22:16:10.027728   10304 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 22:16:10.028861   10304 out.go:177] * Updating the running hyperv "running-upgrade-680100" VM ...
	I0108 22:16:10.029661   10304 machine.go:88] provisioning docker machine ...
	I0108 22:16:10.029719   10304 buildroot.go:166] provisioning hostname "running-upgrade-680100"
	I0108 22:16:10.029778   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-680100 ).state
	I0108 22:16:12.254943   10304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:16:12.255017   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:16:12.255017   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-680100 ).networkadapters[0]).ipaddresses[0]
	I0108 22:16:15.039144   10304 main.go:141] libmachine: [stdout =====>] : 172.29.109.220
	
	I0108 22:16:15.039221   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:16:15.050289   10304 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:15.051093   10304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.109.220 22 <nil> <nil>}
	I0108 22:16:15.051093   10304 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-680100 && echo "running-upgrade-680100" | sudo tee /etc/hostname
	I0108 22:16:15.229363   10304 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-680100
	
	I0108 22:16:15.229363   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-680100 ).state
	I0108 22:16:17.522248   10304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:16:17.522501   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:16:17.522591   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-680100 ).networkadapters[0]).ipaddresses[0]
	I0108 22:16:20.117510   10304 main.go:141] libmachine: [stdout =====>] : 172.29.109.220
	
	I0108 22:16:20.117863   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:16:20.124372   10304 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:20.125095   10304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.109.220 22 <nil> <nil>}
	I0108 22:16:20.125095   10304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-680100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-680100/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-680100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:16:20.265717   10304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:16:20.265717   10304 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0108 22:16:20.265717   10304 buildroot.go:174] setting up certificates
	I0108 22:16:20.265717   10304 provision.go:83] configureAuth start
	I0108 22:16:20.265717   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-680100 ).state
	I0108 22:16:22.495413   10304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:16:22.495413   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:16:22.495413   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-680100 ).networkadapters[0]).ipaddresses[0]
	I0108 22:16:25.495185   10304 main.go:141] libmachine: [stdout =====>] : 172.29.109.220
	
	I0108 22:16:25.495365   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:16:25.495365   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-680100 ).state
	I0108 22:16:27.651581   10304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:16:27.651656   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:16:27.651656   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-680100 ).networkadapters[0]).ipaddresses[0]
	I0108 22:16:30.404807   10304 main.go:141] libmachine: [stdout =====>] : 172.29.109.220
	
	I0108 22:16:30.405039   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:16:30.405039   10304 provision.go:138] copyHostCerts
	I0108 22:16:30.405531   10304 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0108 22:16:30.405531   10304 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0108 22:16:30.406226   10304 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0108 22:16:30.407711   10304 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0108 22:16:30.407761   10304 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0108 22:16:30.408068   10304 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0108 22:16:30.409861   10304 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0108 22:16:30.409920   10304 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0108 22:16:30.410391   10304 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0108 22:16:30.411383   10304 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.running-upgrade-680100 san=[172.29.109.220 172.29.109.220 localhost 127.0.0.1 minikube running-upgrade-680100]
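
provision.go generates a server certificate whose SAN list covers the VM IP, localhost, and the machine name (the san=[...] list logged above). A compact sketch of building such a SAN list with crypto/x509; it self-signs for brevity, whereas the real flow signs with the ca.pem / ca-key.pem shown above:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-680100"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs mirroring the san=[...] list in the log line above.
    		DNSNames:    []string{"localhost", "minikube", "running-upgrade-680100"},
    		IPAddresses: []net.IP{net.ParseIP("172.29.109.220"), net.ParseIP("127.0.0.1")},
    	}
    	// Self-signed here only to keep the sketch short; minikube signs with its CA key.
    	der, _ := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
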
	I0108 22:16:30.555967   10304 provision.go:172] copyRemoteCerts
	I0108 22:16:30.569001   10304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:16:30.569206   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-680100 ).state
	I0108 22:16:32.769331   10304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:16:32.769538   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:16:32.769538   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-680100 ).networkadapters[0]).ipaddresses[0]
	I0108 22:16:35.411256   10304 main.go:141] libmachine: [stdout =====>] : 172.29.109.220
	
	I0108 22:16:35.411256   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:16:35.411256   10304 sshutil.go:53] new ssh client: &{IP:172.29.109.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\running-upgrade-680100\id_rsa Username:docker}
	I0108 22:16:35.513903   10304 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9448178s)
	I0108 22:16:35.513932   10304 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:16:35.531347   10304 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 22:16:35.548614   10304 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 22:16:35.578606   10304 provision.go:86] duration metric: configureAuth took 15.3128129s
	I0108 22:16:35.578606   10304 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:16:35.579897   10304 config.go:182] Loaded profile config "running-upgrade-680100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0108 22:16:35.579897   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-680100 ).state
	I0108 22:16:37.813786   10304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:16:37.813786   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:16:37.813786   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-680100 ).networkadapters[0]).ipaddresses[0]
	I0108 22:16:40.543946   10304 main.go:141] libmachine: [stdout =====>] : 172.29.109.220
	
	I0108 22:16:40.544022   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:16:40.549807   10304 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:40.550520   10304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.109.220 22 <nil> <nil>}
	I0108 22:16:40.550520   10304 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 22:16:40.693984   10304 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0108 22:16:40.693984   10304 buildroot.go:70] root file system type: tmpfs
	I0108 22:16:40.693984   10304 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 22:16:40.693984   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-680100 ).state
	I0108 22:16:42.813529   10304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:16:42.813609   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:16:42.813609   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-680100 ).networkadapters[0]).ipaddresses[0]
	I0108 22:16:45.456690   10304 main.go:141] libmachine: [stdout =====>] : 172.29.109.220
	
	I0108 22:16:45.456690   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:16:45.461674   10304 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:45.462683   10304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.109.220 22 <nil> <nil>}
	I0108 22:16:45.462683   10304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 22:16:45.614174   10304 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 22:16:45.614715   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-680100 ).state
	I0108 22:16:47.792510   10304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:16:47.792510   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:16:47.792510   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-680100 ).networkadapters[0]).ipaddresses[0]
	I0108 22:16:50.382485   10304 main.go:141] libmachine: [stdout =====>] : 172.29.109.220
	
	I0108 22:16:50.382708   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:16:50.387707   10304 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:50.387707   10304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.109.220 22 <nil> <nil>}
	I0108 22:16:50.387707   10304 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 22:17:04.582439   10304 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service
	+++ /lib/systemd/system/docker.service.new
	@@ -3,9 +3,12 @@
	 Documentation=https://docs.docker.com
	 After=network.target  minikube-automount.service docker.socket
	 Requires= minikube-automount.service docker.socket 
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	+Restart=on-failure
	 
	 
	 
	@@ -21,7 +24,7 @@
	 # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	 ExecStart=
	 ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	-ExecReload=/bin/kill -s HUP 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0108 22:17:04.582439   10304 machine.go:91] provisioned docker machine in 54.5525044s
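
The unit update above is idempotent: the new content is written to docker.service.new, diffed against the live file, and only swapped in (followed by daemon-reload, enable, and restart) when the diff is non-empty, which is why the diff output appears in the SSH result. A sketch of composing that guarded swap as a single remote command string; the helper name is illustrative and the result would be handed to whatever runner executes commands over SSH:

    package main

    import "fmt"

    // switchUnitCmd builds the guarded swap shown above: replace the live unit
    // and bounce the service only when the freshly written *.new file differs.
    func switchUnitCmd(unit string) string {
    	path := "/lib/systemd/system/" + unit
    	return fmt.Sprintf(
    		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
    			"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && "+
    			"sudo systemctl -f restart %[2]s; }",
    		path, unit)
    }

    func main() {
    	fmt.Println(switchUnitCmd("docker.service"))
    }
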
	I0108 22:17:04.582439   10304 start.go:300] post-start starting for "running-upgrade-680100" (driver="hyperv")
	I0108 22:17:04.582439   10304 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:17:04.597443   10304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:17:04.597443   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-680100 ).state
	I0108 22:17:06.890634   10304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:17:06.890721   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:17:06.891089   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-680100 ).networkadapters[0]).ipaddresses[0]
	I0108 22:17:09.756979   10304 main.go:141] libmachine: [stdout =====>] : 172.29.109.220
	
	I0108 22:17:09.756979   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:17:09.756979   10304 sshutil.go:53] new ssh client: &{IP:172.29.109.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\running-upgrade-680100\id_rsa Username:docker}
	I0108 22:17:09.861172   10304 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2637028s)
	I0108 22:17:09.881590   10304 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:17:09.888595   10304 info.go:137] Remote host: Buildroot 2019.02.7
	I0108 22:17:09.889576   10304 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0108 22:17:09.889576   10304 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0108 22:17:09.890588   10304 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem -> 30082.pem in /etc/ssl/certs
	I0108 22:17:09.911582   10304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:17:09.931016   10304 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem --> /etc/ssl/certs/30082.pem (1708 bytes)
	I0108 22:17:09.958181   10304 start.go:303] post-start completed in 5.3757155s
	I0108 22:17:09.958181   10304 fix.go:56] fixHost completed within 1m2.1863222s
	I0108 22:17:09.958181   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-680100 ).state
	I0108 22:17:12.199593   10304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:17:12.199763   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:17:12.199822   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-680100 ).networkadapters[0]).ipaddresses[0]
	I0108 22:17:14.839875   10304 main.go:141] libmachine: [stdout =====>] : 172.29.109.220
	
	I0108 22:17:14.840131   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:17:14.845403   10304 main.go:141] libmachine: Using SSH client type: native
	I0108 22:17:14.846110   10304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.109.220 22 <nil> <nil>}
	I0108 22:17:14.846110   10304 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 22:17:14.987682   10304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704752234.982336663
	
	I0108 22:17:14.987682   10304 fix.go:206] guest clock: 1704752234.982336663
	I0108 22:17:14.987682   10304 fix.go:219] Guest: 2024-01-08 22:17:14.982336663 +0000 UTC Remote: 2024-01-08 22:17:09.9581814 +0000 UTC m=+355.093753201 (delta=5.024155263s)
	I0108 22:17:14.987682   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-680100 ).state
	I0108 22:17:17.221686   10304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:17:17.221955   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:17:17.221955   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-680100 ).networkadapters[0]).ipaddresses[0]
	I0108 22:17:19.886832   10304 main.go:141] libmachine: [stdout =====>] : 172.29.109.220
	
	I0108 22:17:19.886832   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:17:19.893040   10304 main.go:141] libmachine: Using SSH client type: native
	I0108 22:17:19.893269   10304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.109.220 22 <nil> <nil>}
	I0108 22:17:19.893863   10304 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1704752234
	I0108 22:17:20.037814   10304 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan  8 22:17:14 UTC 2024
	
	I0108 22:17:20.037814   10304 fix.go:226] clock set: Mon Jan  8 22:17:14 UTC 2024
	 (err=<nil>)
	I0108 22:17:20.037897   10304 start.go:83] releasing machines lock for "running-upgrade-680100", held for 1m12.2662305s
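
fix.go compares the guest's `date +%s.%N` output against the host wall clock and, because the delta above exceeds a few seconds, resets the guest with `sudo date -s @<epoch>`. A small sketch of that delta check, assuming an arbitrary 2-second threshold (minikube's actual threshold may differ):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // guestDelta parses "date +%s.%N" output and reports how far the guest
    // clock is from the reference time.
    func guestDelta(out string, now time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return now.Sub(guest), nil
    }

    func main() {
    	d, _ := guestDelta("1704752234.982336663\n", time.Now())
    	if d > 2*time.Second || d < -2*time.Second { // illustrative threshold
    		fmt.Printf("clock skew %v, would run: sudo date -s @%d\n", d, time.Now().Unix())
    	}
    }
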
	I0108 22:17:20.038159   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-680100 ).state
	I0108 22:17:22.359927   10304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:17:22.359927   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:17:22.359927   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-680100 ).networkadapters[0]).ipaddresses[0]
	I0108 22:17:25.311938   10304 main.go:141] libmachine: [stdout =====>] : 172.29.109.220
	
	I0108 22:17:25.311938   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:17:25.316962   10304 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:17:25.316962   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-680100 ).state
	I0108 22:17:25.329959   10304 ssh_runner.go:195] Run: cat /version.json
	I0108 22:17:25.330970   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-680100 ).state
	I0108 22:17:28.005786   10304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:17:28.005786   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:17:28.005786   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-680100 ).networkadapters[0]).ipaddresses[0]
	I0108 22:17:28.085931   10304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:17:28.085931   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:17:28.086038   10304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-680100 ).networkadapters[0]).ipaddresses[0]
	I0108 22:17:31.262374   10304 main.go:141] libmachine: [stdout =====>] : 172.29.109.220
	
	I0108 22:17:31.262555   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:17:31.262661   10304 sshutil.go:53] new ssh client: &{IP:172.29.109.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\running-upgrade-680100\id_rsa Username:docker}
	I0108 22:17:31.339979   10304 main.go:141] libmachine: [stdout =====>] : 172.29.109.220
	
	I0108 22:17:31.340081   10304 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:17:31.340757   10304 sshutil.go:53] new ssh client: &{IP:172.29.109.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\running-upgrade-680100\id_rsa Username:docker}
	I0108 22:17:31.385235   10304 ssh_runner.go:235] Completed: cat /version.json: (6.0542346s)
	W0108 22:17:31.385338   10304 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0108 22:17:31.400518   10304 ssh_runner.go:195] Run: systemctl --version
	I0108 22:17:31.424508   10304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:17:31.560671   10304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:17:31.560671   10304 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (6.2436786s)
	I0108 22:17:31.578656   10304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0108 22:17:31.604100   10304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0108 22:17:31.613038   10304 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0108 22:17:31.613038   10304 start.go:475] detecting cgroup driver to use...
	I0108 22:17:31.613396   10304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:17:31.655390   10304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0108 22:17:31.684226   10304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 22:17:31.702467   10304 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 22:17:31.715216   10304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 22:17:31.743828   10304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 22:17:31.777437   10304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 22:17:31.812489   10304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 22:17:31.847999   10304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:17:31.884645   10304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 22:17:31.916921   10304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:17:31.952418   10304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:17:31.976497   10304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:17:32.299331   10304 ssh_runner.go:195] Run: sudo systemctl restart containerd
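
The run of sed invocations above rewrites /etc/containerd/config.toml so that containerd uses the cgroupfs driver and the runc v2 shim before the restart. The same edits can be expressed as regexp replacements over the file contents; the config excerpt below is a toy stand-in, and the patterns mirror the commands logged above:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// toy excerpt standing in for /etc/containerd/config.toml
    	cfg := "  SystemdCgroup = true\n  runtime_type = \"io.containerd.runtime.v1.linux\"\n"
    	// force the cgroupfs driver: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	cfg = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
    		ReplaceAllString(cfg, "${1}SystemdCgroup = false")
    	// move the shim to runc v2: sed 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g'
    	cfg = regexp.MustCompile(regexp.QuoteMeta(`"io.containerd.runtime.v1.linux"`)).
    		ReplaceAllString(cfg, `"io.containerd.runc.v2"`)
    	fmt.Print(cfg)
    }
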
	I0108 22:17:32.322904   10304 start.go:475] detecting cgroup driver to use...
	I0108 22:17:32.338264   10304 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 22:17:32.364236   10304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:17:32.410100   10304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:17:32.459742   10304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:17:32.487883   10304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 22:17:32.508131   10304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:17:32.546867   10304 ssh_runner.go:195] Run: which cri-dockerd
	I0108 22:17:32.566339   10304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 22:17:32.575498   10304 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0108 22:17:32.608413   10304 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 22:17:32.798902   10304 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 22:17:32.993508   10304 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0108 22:17:32.993640   10304 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
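
docker.go writes a small /etc/docker/daemon.json to pin the docker cgroup driver before the daemon is restarted. The 130-byte payload itself is not shown in the log; a plausible minimal configuration for the cgroupfs driver (an assumption about typical content, not the captured file) could be rendered like this:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// assumed minimal daemon.json; "exec-opts" with native.cgroupdriver is
    	// the documented dockerd knob for selecting the cgroup driver.
    	cfg := map[string]interface{}{
    		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
    	}
    	b, _ := json.MarshalIndent(cfg, "", "  ")
    	fmt.Println(string(b))
    }
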
	I0108 22:17:33.020131   10304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:17:33.208523   10304 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 22:17:45.216534   10304 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.0078793s)
	I0108 22:17:45.229405   10304 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0108 22:17:45.304149   10304 out.go:177] 
	W0108 22:17:45.305684   10304 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Logs begin at Mon 2024-01-08 22:08:29 UTC, end at Mon 2024-01-08 22:17:45 UTC. --
	Jan 08 22:09:53 running-upgrade-680100 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.389967109Z" level=info msg="Starting up"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.392699309Z" level=info msg="libcontainerd: started new containerd process" pid=2761
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.392789609Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.392801209Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.392826609Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.392843509Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.432536109Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.432940809Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.433124109Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.433625909Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.433730409Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.435997509Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.436044509Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.436160009Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.436555009Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.436874109Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.436989209Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.437049309Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.437136309Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.437149609Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.447933009Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.448057409Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.448255909Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.448298409Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.448313309Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.448326109Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.448338309Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.448401609Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.448419409Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.448505509Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.448681609Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.448825309Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449514709Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449708609Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449748609Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449761609Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449773109Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449792309Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449805409Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449825309Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449840609Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449853209Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449863509Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449918909Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449933509Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449944309Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449954509Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.450086509Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.450242509Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.450266809Z" level=info msg="containerd successfully booted in 0.019217s"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.460781209Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.460992009Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.461086609Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.461147809Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.463218609Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.463590409Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.463768309Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.463797909Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.504313909Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.504577309Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.504676409Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.504692009Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.504699309Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.504706809Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.504976609Z" level=info msg="Loading containers: start."
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.628302209Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.703595009Z" level=info msg="Loading containers: done."
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.730153809Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.730669609Z" level=info msg="Daemon has completed initialization"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.777719609Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 08 22:09:53 running-upgrade-680100 systemd[1]: Started Docker Application Container Engine.
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.778838009Z" level=info msg="API listen on [::]:2376"
	Jan 08 22:10:56 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:10:56.953714712Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/94f7e45b20a527a5016733050e2f44ecbefc6fed5ab92791251c545a12c89b69/shim.sock" debug=false pid=4403
	Jan 08 22:10:57 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:10:57.062728963Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0007852aa1c1940d8794545d3b9710fb7f3f7c8f741e21e203fe7dad02d1660b/shim.sock" debug=false pid=4428
	Jan 08 22:10:57 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:10:57.758501901Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2bd2f07b15e627c89390c268b3a39368050ccc4d7052722b5f7efd07133abe04/shim.sock" debug=false pid=4529
	Jan 08 22:10:58 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:10:58.957590961Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1e677fc038a76937e4499fb992e5f25922103d050a25f44bdef5f5a694295d30/shim.sock" debug=false pid=4597
	Jan 08 22:10:59 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:10:59.077623425Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3226d1aa7d70931ab1cadb9921a9a6214df9ff81fe3ed7c39f353910e859f9b8/shim.sock" debug=false pid=4623
	Jan 08 22:10:59 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:10:59.081391915Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a015b3b97fa8a20c2672e0d1f993e656dea04e1aed2b68f25c2674be2d76822e/shim.sock" debug=false pid=4629
	Jan 08 22:10:59 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:10:59.124052898Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b76f003b30ff5dec2d22f44d781db593ac1028f132a911cfb331eec05bd75a14/shim.sock" debug=false pid=4662
	Jan 08 22:10:59 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:10:59.307250098Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5e61f0771ced02225087196c0509b302bc1efea658bee57c8bfebb24c8d88dde/shim.sock" debug=false pid=4717
	Jan 08 22:11:00 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:11:00.614948131Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bd05e21ddd4d7f8b9c228dd39f1fb19e98c86c09e346f968b5d44bc797520215/shim.sock" debug=false pid=4919
	Jan 08 22:11:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:11:01.066524186Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3a806fc937e35ca050ccc8d15a4902a14c751f1f03fa868c64272b6d4b1ac713/shim.sock" debug=false pid=5048
	Jan 08 22:11:19 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:11:19.747726870Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ad052c1fa5ed19cf06bd74b8695d7462e7a6a62456916495b0803b12464bb880/shim.sock" debug=false pid=5676
	Jan 08 22:11:20 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:11:20.087444426Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/64f04967e22a7460b24035cb378c5cb0d27a3b027c493566db2d163a659fa47a/shim.sock" debug=false pid=5728
	Jan 08 22:11:24 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:11:24.074547544Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/148c8e224531223a1f2eb709da2677c3b504eb624c72dcf3b672b6de6b77b0a2/shim.sock" debug=false pid=5881
	Jan 08 22:11:24 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:11:24.630643059Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dc26a8ed679e70b3f45b977523cd1acb0464ac985cdd678264d9ea47785d6972/shim.sock" debug=false pid=5962
	Jan 08 22:11:24 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:11:24.651334573Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5b04c76c5b72b0623c840cac5eb8f6d4d43a4e419301824d8277496c4848c682/shim.sock" debug=false pid=5974
	Jan 08 22:11:25 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:11:25.160739695Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/987ac24fd0d480229d28918f742d8659edf69e499371dd09a6c3e3a8695f486a/shim.sock" debug=false pid=6071
	Jan 08 22:11:25 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:11:25.848332315Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ee753ed9ca6b81ab0d6ffb1345d6dd67d3a81e5dc26bf70dff04877449b4b4ea/shim.sock" debug=false pid=6129
	Jan 08 22:11:26 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:11:26.276919868Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/073f462bd9a112252fd5243a3bcd19ebb25ca043781e792ecdbe676bd8119890/shim.sock" debug=false pid=6196
	Jan 08 22:16:50 running-upgrade-680100 systemd[1]: Stopping Docker Application Container Engine...
	Jan 08 22:16:50 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:50.900514834Z" level=info msg="Processing signal 'terminated'"
	Jan 08 22:16:51 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:51.941964872Z" level=info msg="shim reaped" id=bd05e21ddd4d7f8b9c228dd39f1fb19e98c86c09e346f968b5d44bc797520215
	Jan 08 22:16:51 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:51.946136686Z" level=info msg="shim reaped" id=5e61f0771ced02225087196c0509b302bc1efea658bee57c8bfebb24c8d88dde
	Jan 08 22:16:51 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:51.952161906Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:51 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:51.952448507Z" level=warning msg="bd05e21ddd4d7f8b9c228dd39f1fb19e98c86c09e346f968b5d44bc797520215 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/bd05e21ddd4d7f8b9c228dd39f1fb19e98c86c09e346f968b5d44bc797520215/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:16:51 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:51.960089133Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:51 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:51.960227133Z" level=warning msg="5e61f0771ced02225087196c0509b302bc1efea658bee57c8bfebb24c8d88dde cleanup: failed to unmount IPC: umount /var/lib/docker/containers/5e61f0771ced02225087196c0509b302bc1efea658bee57c8bfebb24c8d88dde/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:16:51 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:51.990200834Z" level=info msg="shim reaped" id=0007852aa1c1940d8794545d3b9710fb7f3f7c8f741e21e203fe7dad02d1660b
	Jan 08 22:16:51 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:51.995534552Z" level=info msg="shim reaped" id=94f7e45b20a527a5016733050e2f44ecbefc6fed5ab92791251c545a12c89b69
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.000276968Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.004319081Z" level=info msg="shim reaped" id=ee753ed9ca6b81ab0d6ffb1345d6dd67d3a81e5dc26bf70dff04877449b4b4ea
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.005477185Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.019962930Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.087454938Z" level=info msg="shim reaped" id=ad052c1fa5ed19cf06bd74b8695d7462e7a6a62456916495b0803b12464bb880
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.088715142Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.096329466Z" level=info msg="shim reaped" id=987ac24fd0d480229d28918f742d8659edf69e499371dd09a6c3e3a8695f486a
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.102705685Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.102967386Z" level=warning msg="987ac24fd0d480229d28918f742d8659edf69e499371dd09a6c3e3a8695f486a cleanup: failed to unmount IPC: umount /var/lib/docker/containers/987ac24fd0d480229d28918f742d8659edf69e499371dd09a6c3e3a8695f486a/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.107710001Z" level=info msg="shim reaped" id=3a806fc937e35ca050ccc8d15a4902a14c751f1f03fa868c64272b6d4b1ac713
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.108110802Z" level=info msg="shim reaped" id=148c8e224531223a1f2eb709da2677c3b504eb624c72dcf3b672b6de6b77b0a2
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.109778507Z" level=info msg="shim reaped" id=64f04967e22a7460b24035cb378c5cb0d27a3b027c493566db2d163a659fa47a
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.113897520Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.118800535Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.118990836Z" level=warning msg="3a806fc937e35ca050ccc8d15a4902a14c751f1f03fa868c64272b6d4b1ac713 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/3a806fc937e35ca050ccc8d15a4902a14c751f1f03fa868c64272b6d4b1ac713/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.133779882Z" level=info msg="shim reaped" id=a015b3b97fa8a20c2672e0d1f993e656dea04e1aed2b68f25c2674be2d76822e
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.133798682Z" level=info msg="shim reaped" id=b76f003b30ff5dec2d22f44d781db593ac1028f132a911cfb331eec05bd75a14
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.134501084Z" level=info msg="shim reaped" id=2bd2f07b15e627c89390c268b3a39368050ccc4d7052722b5f7efd07133abe04
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.137625993Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.137964795Z" level=warning msg="64f04967e22a7460b24035cb378c5cb0d27a3b027c493566db2d163a659fa47a cleanup: failed to unmount IPC: umount /var/lib/docker/containers/64f04967e22a7460b24035cb378c5cb0d27a3b027c493566db2d163a659fa47a/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.151952738Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.152172638Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.152280839Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.156822953Z" level=info msg="shim reaped" id=5b04c76c5b72b0623c840cac5eb8f6d4d43a4e419301824d8277496c4848c682
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.167143185Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.660386810Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5f9792e14ebd7ea17983f6ca19a526242e0b8b3b7e9cd0ea59a60f33b3d32f0e/shim.sock" debug=false pid=10370
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.694046415Z" level=info msg="shim reaped" id=1e677fc038a76937e4499fb992e5f25922103d050a25f44bdef5f5a694295d30
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.704138746Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.704585347Z" level=warning msg="1e677fc038a76937e4499fb992e5f25922103d050a25f44bdef5f5a694295d30 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/1e677fc038a76937e4499fb992e5f25922103d050a25f44bdef5f5a694295d30/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.853531008Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/16151f48b46869da3bd820f67eed06ea8fdb02cf1dda81674a8605eb2f0d7912/shim.sock" debug=false pid=10414
	Jan 08 22:16:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:53.011948995Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/82b6010651646e5ba1e966f3970fd180042139fac6a9027a5eacce56544556b1/shim.sock" debug=false pid=10468
	Jan 08 22:16:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:53.126997820Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a0fce02b0410c95075869dfed9e33b33e769bbe7eb83ddad64c83928464f3d81/shim.sock" debug=false pid=10500
	Jan 08 22:16:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:53.718977290Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dda254c6179a8b68ba4ca97bbe5e3e8ad7ad6c700f2cca3bb8a10876ded3a9ba/shim.sock" debug=false pid=10586
	Jan 08 22:16:54 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:54.360809504Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7c2fecfed2c3536c77d661dd40bb892177423f960e557fe0783d55b1b99c0498/shim.sock" debug=false pid=10640
	Jan 08 22:16:56 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:56.261745774Z" level=info msg="shim reaped" id=dc26a8ed679e70b3f45b977523cd1acb0464ac985cdd678264d9ea47785d6972
	Jan 08 22:16:56 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:56.272185295Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:56 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:56.272478596Z" level=warning msg="dc26a8ed679e70b3f45b977523cd1acb0464ac985cdd678264d9ea47785d6972 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/dc26a8ed679e70b3f45b977523cd1acb0464ac985cdd678264d9ea47785d6972/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:16:56 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:56.432407221Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/417cee3eebce6db7a75216376eb73a2e09ee4fed700845778e11ac9f1b1b8921/shim.sock" debug=false pid=10760
	Jan 08 22:16:56 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:56.483123324Z" level=info msg="shim reaped" id=073f462bd9a112252fd5243a3bcd19ebb25ca043781e792ecdbe676bd8119890
	Jan 08 22:16:56 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:56.487126532Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:56 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:56.489197636Z" level=warning msg="073f462bd9a112252fd5243a3bcd19ebb25ca043781e792ecdbe676bd8119890 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/073f462bd9a112252fd5243a3bcd19ebb25ca043781e792ecdbe676bd8119890/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:16:56 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:56.825923420Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e27b735f200c536cfd1858889ad7a2d5b85d42e0435792dd07384c85527166cb/shim.sock" debug=false pid=10838
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.153010816Z" level=info msg="Container 3226d1aa7d70931ab1cadb9921a9a6214df9ff81fe3ed7c39f353910e859f9b8 failed to exit within 10 seconds of signal 15 - using the force"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.180569438Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cbdffc70e4bb2375bebd23a26469c55c70d763ae2c32c8342667b7783ee62f62/shim.sock" debug=false pid=10913
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.324476252Z" level=info msg="shim reaped" id=3226d1aa7d70931ab1cadb9921a9a6214df9ff81fe3ed7c39f353910e859f9b8
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.332236558Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.332836458Z" level=warning msg="3226d1aa7d70931ab1cadb9921a9a6214df9ff81fe3ed7c39f353910e859f9b8 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/3226d1aa7d70931ab1cadb9921a9a6214df9ff81fe3ed7c39f353910e859f9b8/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.413462622Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.414218423Z" level=info msg="Daemon shutdown complete"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.414261523Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.414285423Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.446726349Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.453104054Z" level=warning msg="1ef9903c448f09800be67d82f63abe612d105bb6dc55da33982b9eab8513d39b cleanup: failed to unmount IPC: umount /var/lib/docker/containers/1ef9903c448f09800be67d82f63abe612d105bb6dc55da33982b9eab8513d39b/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.459626259Z" level=error msg="1ef9903c448f09800be67d82f63abe612d105bb6dc55da33982b9eab8513d39b cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.459742559Z" level=error msg="Handler for POST /containers/1ef9903c448f09800be67d82f63abe612d105bb6dc55da33982b9eab8513d39b/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.479128975Z" level=warning msg="d3f7854e54f757d9faa1705c1e6f72ff0941d5a5bfb7d820adab609c57b0931e cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d3f7854e54f757d9faa1705c1e6f72ff0941d5a5bfb7d820adab609c57b0931e/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.488865382Z" level=error msg="d3f7854e54f757d9faa1705c1e6f72ff0941d5a5bfb7d820adab609c57b0931e cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.489049382Z" level=error msg="Handler for POST /containers/d3f7854e54f757d9faa1705c1e6f72ff0941d5a5bfb7d820adab609c57b0931e/start returned error: grpc: the client connection is closing: unknown"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.506927597Z" level=warning msg="failed to retrieve containerd version: rpc error: code = Canceled desc = grpc: the client connection is closing"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.508092198Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.922500126Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.928466931Z" level=warning msg="0e2f77bf34fe166b6d3ced2a5ae8cad28c16feeb114e28d48e080a1b731a83bc cleanup: failed to unmount IPC: umount /var/lib/docker/containers/0e2f77bf34fe166b6d3ced2a5ae8cad28c16feeb114e28d48e080a1b731a83bc/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.938826239Z" level=error msg="0e2f77bf34fe166b6d3ced2a5ae8cad28c16feeb114e28d48e080a1b731a83bc cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.938866039Z" level=error msg="Handler for POST /containers/0e2f77bf34fe166b6d3ced2a5ae8cad28c16feeb114e28d48e080a1b731a83bc/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: docker.service: Succeeded.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: Stopped Docker Application Container Engine.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 10370 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 10414 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 10468 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 10500 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 10586 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 10640 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 10760 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 10838 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 10913 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.476314754Z" level=info msg="Starting up"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.479521356Z" level=info msg="libcontainerd: started new containerd process" pid=11013
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.479708256Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.479861256Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.479944456Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.480071156Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.523270380Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.523765881Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.524337181Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.524736681Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.524852681Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.527030482Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.527137382Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.527553283Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.528190683Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.528718383Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.528820083Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.528849583Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.528859183Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.528867283Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.529132384Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.529156684Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.529220884Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.529241084Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.529568484Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.529589084Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.529601284Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.529614184Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.529625584Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.529647784Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.561673202Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.561858902Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.562547502Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563608403Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563739603Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563767203Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563786503Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563800703Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563812703Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563825303Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563838203Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563850603Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563862303Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563904503Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563977503Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563999603Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.564011703Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.564166603Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.564214503Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.564228503Z" level=info msg="containerd successfully booted in 0.043028s"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.579588912Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.579777912Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.579868912Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.579994312Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.581719913Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.581760313Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.581778913Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.581795813Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.589326617Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.668071461Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.668297961Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.668488661Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.668557161Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.668622861Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.668734961Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.668955962Z" level=info msg="Loading containers: start."
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.621737549Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.633644653Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.659656261Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=16151f48b46869da3bd820f67eed06ea8fdb02cf1dda81674a8605eb2f0d7912 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/16151f48b46869da3bd820f67eed06ea8fdb02cf1dda81674a8605eb2f0d7912"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.661859062Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.663040362Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.683277769Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=dda254c6179a8b68ba4ca97bbe5e3e8ad7ad6c700f2cca3bb8a10876ded3a9ba path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/dda254c6179a8b68ba4ca97bbe5e3e8ad7ad6c700f2cca3bb8a10876ded3a9ba"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.683757569Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.698315274Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=5f9792e14ebd7ea17983f6ca19a526242e0b8b3b7e9cd0ea59a60f33b3d32f0e path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/5f9792e14ebd7ea17983f6ca19a526242e0b8b3b7e9cd0ea59a60f33b3d32f0e"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.698793374Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.727621083Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.727818483Z" level=warning msg="82b6010651646e5ba1e966f3970fd180042139fac6a9027a5eacce56544556b1 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/82b6010651646e5ba1e966f3970fd180042139fac6a9027a5eacce56544556b1/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.738941087Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.739252887Z" level=warning msg="7c2fecfed2c3536c77d661dd40bb892177423f960e557fe0783d55b1b99c0498 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7c2fecfed2c3536c77d661dd40bb892177423f960e557fe0783d55b1b99c0498/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.763923195Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=82b6010651646e5ba1e966f3970fd180042139fac6a9027a5eacce56544556b1 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/82b6010651646e5ba1e966f3970fd180042139fac6a9027a5eacce56544556b1"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.764465195Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.764699495Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=7c2fecfed2c3536c77d661dd40bb892177423f960e557fe0783d55b1b99c0498 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/7c2fecfed2c3536c77d661dd40bb892177423f960e557fe0783d55b1b99c0498"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.765036695Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.769404297Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.777278699Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.785334802Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=417cee3eebce6db7a75216376eb73a2e09ee4fed700845778e11ac9f1b1b8921 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/417cee3eebce6db7a75216376eb73a2e09ee4fed700845778e11ac9f1b1b8921"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.786907403Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.787965603Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=cbdffc70e4bb2375bebd23a26469c55c70d763ae2c32c8342667b7783ee62f62 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/cbdffc70e4bb2375bebd23a26469c55c70d763ae2c32c8342667b7783ee62f62"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.788259503Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.959881559Z" level=info msg="Removing stale sandbox 54606fc5760a0bb636b945ce19ee35c68fd741f970a1e678918cc138989deae4 (cbdffc70e4bb2375bebd23a26469c55c70d763ae2c32c8342667b7783ee62f62)"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.962800560Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b5c68db7ae78ff10cc41dfe06eda05218de51d41e157285de1ca25a9e387e73b 133ffc0a0140ef1291b8e8fe634ef34ad590720e58f721d5a7ad35dd46476dc1], retrying...."
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.104902382Z" level=info msg="Removing stale sandbox a95e8ec129362068a46a406ed29d8ed4355910e2dc27784cded0d740bfbe6d29 (16151f48b46869da3bd820f67eed06ea8fdb02cf1dda81674a8605eb2f0d7912)"
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.108259583Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b5c68db7ae78ff10cc41dfe06eda05218de51d41e157285de1ca25a9e387e73b 2fc52b319f765d45c22aab8037e5877d402369f05e0ecb9e48098517aa473728], retrying...."
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.225760994Z" level=info msg="Removing stale sandbox b54b42228f059b43e69c16db7d830db81ead0ff4fba183e94e670dab65a28266 (dda254c6179a8b68ba4ca97bbe5e3e8ad7ad6c700f2cca3bb8a10876ded3a9ba)"
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.229169994Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b5c68db7ae78ff10cc41dfe06eda05218de51d41e157285de1ca25a9e387e73b 835454e44cb891a45577311ffd395572fa8adc9996180a29bc87c2b0bf5cb868], retrying...."
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.360880107Z" level=info msg="Removing stale sandbox b79e9428b2255fe394cd707ecf7d828f274765d335567723b5359e675777cb7d (417cee3eebce6db7a75216376eb73a2e09ee4fed700845778e11ac9f1b1b8921)"
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.367590908Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 81ea524f78b9c25368981085fd4903aefa279ec2e33eca681822225394f96a38 b7a7d01b8cacc2760eac317c3bb7c626d7c16439661ab4b04287c82c0a223cc4], retrying...."
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.485583319Z" level=info msg="Removing stale sandbox cd2ee630010eda43b09e74c8f341ff654dcbeb803222b8b5134d28a7eddeb145 (5f9792e14ebd7ea17983f6ca19a526242e0b8b3b7e9cd0ea59a60f33b3d32f0e)"
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.488726920Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b5c68db7ae78ff10cc41dfe06eda05218de51d41e157285de1ca25a9e387e73b bcddfe03291e7e1d21e6b352595b4ff26071d3d479f14fbb8537659aaa146065], retrying...."
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.505475321Z" level=info msg="There are old running containers, the network config will not take affect"
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.524575523Z" level=info msg="Loading containers: done."
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.553634126Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.553822626Z" level=info msg="Daemon has completed initialization"
	Jan 08 22:17:04 running-upgrade-680100 systemd[1]: Started Docker Application Container Engine.
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.575522328Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.575578228Z" level=info msg="API listen on [::]:2376"
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.986662968Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/765cae7a806c5d98c7bb2ddc9617fc86ae05331f8304c42e52fcf3dc35579602/shim.sock" debug=false pid=11645
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.989891668Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fe952f39beabedbd5b1d0159905a6835e42486213b7e721f79fb76946f782047/shim.sock" debug=false pid=11646
	Jan 08 22:17:05 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:05.059671862Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/286c5f0415000e09d2d945079c18e13ca0958096a7695d2cc26f54c48653316a/shim.sock" debug=false pid=11682
	Jan 08 22:17:05 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:05.120597054Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ec5e099e612bd239f4be4a67dc87fd947c5d8599e3f7f86eea178d77aabf89d6/shim.sock" debug=false pid=11688
	Jan 08 22:17:05 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:05.125914953Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fc00af195f4d5abff7d4e8a867c40004abe64ac8586dff21ea21e8326feb8a56/shim.sock" debug=false pid=11700
	Jan 08 22:17:05 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:05.285943833Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:05 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:05.286319233Z" level=warning msg="a0fce02b0410c95075869dfed9e33b33e769bbe7eb83ddad64c83928464f3d81 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/a0fce02b0410c95075869dfed9e33b33e769bbe7eb83ddad64c83928464f3d81/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:05 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:05.336798826Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=a0fce02b0410c95075869dfed9e33b33e769bbe7eb83ddad64c83928464f3d81 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/a0fce02b0410c95075869dfed9e33b33e769bbe7eb83ddad64c83928464f3d81"
	Jan 08 22:17:05 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:05.337514926Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:05 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:05.477500508Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/55a70babd955a14f8cf0dde5acd5c59e3c9f23969e7f85985f543a9a74251106/shim.sock" debug=false pid=11847
	Jan 08 22:17:05 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:05.649148986Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f9c0d5cacfe92db986a709f69b556f690b9409e6284842a5d2e7182aea23e4cb/shim.sock" debug=false pid=11907
	Jan 08 22:17:05 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:05.784085869Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/04c2b20ab76e803ca209782707f610715502a095bd9de02b0d569c2e50f4f4ba/shim.sock" debug=false pid=11943
	Jan 08 22:17:05 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:05.945202248Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9b649ed41c63dbbc6edaa8d404297c03b67d5cecbfe197112348c23aae87a45c/shim.sock" debug=false pid=11983
	Jan 08 22:17:08 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:08.378509431Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:08 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:08.379512030Z" level=warning msg="e27b735f200c536cfd1858889ad7a2d5b85d42e0435792dd07384c85527166cb cleanup: failed to unmount IPC: umount /var/lib/docker/containers/e27b735f200c536cfd1858889ad7a2d5b85d42e0435792dd07384c85527166cb/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:08 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:08.389208822Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=e27b735f200c536cfd1858889ad7a2d5b85d42e0435792dd07384c85527166cb path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/e27b735f200c536cfd1858889ad7a2d5b85d42e0435792dd07384c85527166cb"
	Jan 08 22:17:08 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:08.389522722Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:08 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:08.526934514Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3032a2b99f2906facc0a8658d26a6a37931d453297f774b554911a55c400328e/shim.sock" debug=false pid=12130
	Jan 08 22:17:09 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:09.013768631Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/70ecf5f9d8d8336740535e876507144f03c7e98cfa80c26267045209dd200ad2/shim.sock" debug=false pid=12193
	Jan 08 22:17:09 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:09.587288861Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d037da605157aca849e39deba5e54b1e98fe5338c701befa8207cfc96de9de3d/shim.sock" debug=false pid=12263
	Jan 08 22:17:09 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:09.975549675Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4db7689dc25eaa0929acbd62e6f654b3693bec31604e0b7122dd4cc23d2da29e/shim.sock" debug=false pid=12356
	Jan 08 22:17:10 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:10.988758062Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/531a9db4529c4313654f78f4cba8f8fb39560cda67a2573901321cae465e18b1/shim.sock" debug=false pid=12409
	Jan 08 22:17:11 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:11.303909922Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ded3fdde774b13af952974a90dd94e04c7166aeb9d709e25b36029391dd117bf/shim.sock" debug=false pid=12456
	Jan 08 22:17:16 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:16.533674166Z" level=info msg="shim reaped" id=f9c0d5cacfe92db986a709f69b556f690b9409e6284842a5d2e7182aea23e4cb
	Jan 08 22:17:16 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:16.544263041Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:16 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:16.544443840Z" level=warning msg="f9c0d5cacfe92db986a709f69b556f690b9409e6284842a5d2e7182aea23e4cb cleanup: failed to unmount IPC: umount /var/lib/docker/containers/f9c0d5cacfe92db986a709f69b556f690b9409e6284842a5d2e7182aea23e4cb/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:17 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:17.842679191Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/122d29bda1f506f812663fcd8d27e8ee8050bc254abf52cd661c16155e83e714/shim.sock" debug=false pid=12580
	Jan 08 22:17:22 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:22.897944695Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b42d8c701c28a43f390c0b957d7e0f885f3a8f4a1f07b941f146a270c4a2389a/shim.sock" debug=false pid=12681
	Jan 08 22:17:23 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:23.852278240Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/57f933937b30561281c9f3575ca0dcb382d05d1ae34f8236d376da9398965fc3/shim.sock" debug=false pid=12734
	Jan 08 22:17:27 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:27.922522803Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1d8f4eebfd72c8a74160f56a85987babd8c2f119ddefba80fabc493667ff1d67/shim.sock" debug=false pid=12824
	Jan 08 22:17:33 running-upgrade-680100 systemd[1]: Stopping Docker Application Container Engine...
	Jan 08 22:17:33 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:33.215062177Z" level=info msg="Processing signal 'terminated'"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.241683688Z" level=info msg="shim reaped" id=55a70babd955a14f8cf0dde5acd5c59e3c9f23969e7f85985f543a9a74251106
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.252507033Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.255265893Z" level=info msg="shim reaped" id=286c5f0415000e09d2d945079c18e13ca0958096a7695d2cc26f54c48653316a
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.258871242Z" level=info msg="shim reaped" id=57f933937b30561281c9f3575ca0dcb382d05d1ae34f8236d376da9398965fc3
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.261546803Z" level=info msg="shim reaped" id=3032a2b99f2906facc0a8658d26a6a37931d453297f774b554911a55c400328e
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.264369463Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.280011539Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.286115052Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.286496646Z" level=warning msg="57f933937b30561281c9f3575ca0dcb382d05d1ae34f8236d376da9398965fc3 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/57f933937b30561281c9f3575ca0dcb382d05d1ae34f8236d376da9398965fc3/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.321918340Z" level=info msg="shim reaped" id=fe952f39beabedbd5b1d0159905a6835e42486213b7e721f79fb76946f782047
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.322932125Z" level=info msg="shim reaped" id=765cae7a806c5d98c7bb2ddc9617fc86ae05331f8304c42e52fcf3dc35579602
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.329496731Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.329833026Z" level=info msg="shim reaped" id=fc00af195f4d5abff7d4e8a867c40004abe64ac8586dff21ea21e8326feb8a56
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.331417304Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.341259663Z" level=info msg="shim reaped" id=1d8f4eebfd72c8a74160f56a85987babd8c2f119ddefba80fabc493667ff1d67
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.348950153Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.352386404Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.360976681Z" level=warning msg="1d8f4eebfd72c8a74160f56a85987babd8c2f119ddefba80fabc493667ff1d67 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/1d8f4eebfd72c8a74160f56a85987babd8c2f119ddefba80fabc493667ff1d67/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.361181978Z" level=info msg="shim reaped" id=d037da605157aca849e39deba5e54b1e98fe5338c701befa8207cfc96de9de3d
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.368743770Z" level=info msg="shim reaped" id=531a9db4529c4313654f78f4cba8f8fb39560cda67a2573901321cae465e18b1
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.376378060Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.376867453Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.408117206Z" level=info msg="shim reaped" id=ded3fdde774b13af952974a90dd94e04c7166aeb9d709e25b36029391dd117bf
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.410409874Z" level=info msg="shim reaped" id=ec5e099e612bd239f4be4a67dc87fd947c5d8599e3f7f86eea178d77aabf89d6
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.417539672Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.417700869Z" level=warning msg="ded3fdde774b13af952974a90dd94e04c7166aeb9d709e25b36029391dd117bf cleanup: failed to unmount IPC: umount /var/lib/docker/containers/ded3fdde774b13af952974a90dd94e04c7166aeb9d709e25b36029391dd117bf/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.460759953Z" level=info msg="shim reaped" id=122d29bda1f506f812663fcd8d27e8ee8050bc254abf52cd661c16155e83e714
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.463110819Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.472996478Z" level=warning msg="122d29bda1f506f812663fcd8d27e8ee8050bc254abf52cd661c16155e83e714 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/122d29bda1f506f812663fcd8d27e8ee8050bc254abf52cd661c16155e83e714/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.486887579Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.718397367Z" level=info msg="shim reaped" id=b42d8c701c28a43f390c0b957d7e0f885f3a8f4a1f07b941f146a270c4a2389a
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.719536351Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.719658549Z" level=warning msg="b42d8c701c28a43f390c0b957d7e0f885f3a8f4a1f07b941f146a270c4a2389a cleanup: failed to unmount IPC: umount /var/lib/docker/containers/b42d8c701c28a43f390c0b957d7e0f885f3a8f4a1f07b941f146a270c4a2389a/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.870956184Z" level=info msg="shim reaped" id=9b649ed41c63dbbc6edaa8d404297c03b67d5cecbfe197112348c23aae87a45c
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.881193438Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.881703730Z" level=warning msg="9b649ed41c63dbbc6edaa8d404297c03b67d5cecbfe197112348c23aae87a45c cleanup: failed to unmount IPC: umount /var/lib/docker/containers/9b649ed41c63dbbc6edaa8d404297c03b67d5cecbfe197112348c23aae87a45c/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:35 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:35.003017294Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9dde7dccf20c01f2a124dc6e1e1cf61eb182d9db9302219a07111d295a7af5be/shim.sock" debug=false pid=13696
	Jan 08 22:17:35 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:35.275588494Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b5dbb07377cb3f18866a6683bc12764d5af0a31978b38faccb1a16664f48f2b2/shim.sock" debug=false pid=13737
	Jan 08 22:17:35 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:35.826805408Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8dbb003a9d171ffff34197955c6ace3ee0f819d227c3106fbb3912fbac737a18/shim.sock" debug=false pid=13799
	Jan 08 22:17:38 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:38.597725361Z" level=info msg="shim reaped" id=4db7689dc25eaa0929acbd62e6f654b3693bec31604e0b7122dd4cc23d2da29e
	Jan 08 22:17:38 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:38.606095141Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:38 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:38.606274739Z" level=warning msg="4db7689dc25eaa0929acbd62e6f654b3693bec31604e0b7122dd4cc23d2da29e cleanup: failed to unmount IPC: umount /var/lib/docker/containers/4db7689dc25eaa0929acbd62e6f654b3693bec31604e0b7122dd4cc23d2da29e/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:38 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:38.667052469Z" level=info msg="shim reaped" id=70ecf5f9d8d8336740535e876507144f03c7e98cfa80c26267045209dd200ad2
	Jan 08 22:17:38 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:38.700088496Z" level=warning msg="70ecf5f9d8d8336740535e876507144f03c7e98cfa80c26267045209dd200ad2 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/70ecf5f9d8d8336740535e876507144f03c7e98cfa80c26267045209dd200ad2/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:38 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:38.700099996Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:39 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:39.455629786Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/46ca216b863b19f48bbcfee5ec9ca95fa4088f8f77d16bc185f5053700853a2b/shim.sock" debug=false pid=13910
	Jan 08 22:17:41 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:41.004647222Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9461c83ba4133d94316c71f59168caab37c41a27e421328f86ff46510def85f1/shim.sock" debug=false pid=13964
	Jan 08 22:17:41 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:41.514167232Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/97a808a305928677851d9083961a5500927b3a129ddb5e927f2324511c7a36dd/shim.sock" debug=false pid=14027
	Jan 08 22:17:42 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:42.756904951Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/38c13b445fc98d5a630bd5b9b98e3b0f2cf89216d905bcbe5e033990b6a1a8d1/shim.sock" debug=false pid=14071
	Jan 08 22:17:43 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:43.577159415Z" level=info msg="Container 04c2b20ab76e803ca209782707f610715502a095bd9de02b0d569c2e50f4f4ba failed to exit within 10 seconds of signal 15 - using the force"
	Jan 08 22:17:43 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:43.751927114Z" level=info msg="shim reaped" id=04c2b20ab76e803ca209782707f610715502a095bd9de02b0d569c2e50f4f4ba
	Jan 08 22:17:43 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:43.759642804Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:43 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:43.759671303Z" level=warning msg="04c2b20ab76e803ca209782707f610715502a095bd9de02b0d569c2e50f4f4ba cleanup: failed to unmount IPC: umount /var/lib/docker/containers/04c2b20ab76e803ca209782707f610715502a095bd9de02b0d569c2e50f4f4ba/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:44 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:44.134243044Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 08 22:17:44 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:44.135368028Z" level=info msg="Daemon shutdown complete"
	Jan 08 22:17:44 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:44.135434727Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 08 22:17:44 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:44.135719023Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: docker.service: Succeeded.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: Stopped Docker Application Container Engine.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 13696 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 13737 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 13799 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 13910 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 13964 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 14027 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 14071 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 22:17:45 running-upgrade-680100 dockerd[14178]: time="2024-01-08T22:17:45.201158078Z" level=info msg="Starting up"
	Jan 08 22:17:45 running-upgrade-680100 dockerd[14178]: time="2024-01-08T22:17:45.204354033Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 22:17:45 running-upgrade-680100 dockerd[14178]: time="2024-01-08T22:17:45.204421432Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 22:17:45 running-upgrade-680100 dockerd[14178]: time="2024-01-08T22:17:45.204447831Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 08 22:17:45 running-upgrade-680100 dockerd[14178]: time="2024-01-08T22:17:45.204465531Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 22:17:45 running-upgrade-680100 dockerd[14178]: time="2024-01-08T22:17:45.204812426Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Jan 08 22:17:45 running-upgrade-680100 dockerd[14178]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused": unavailable
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Logs begin at Mon 2024-01-08 22:08:29 UTC, end at Mon 2024-01-08 22:17:45 UTC. --
	Jan 08 22:09:53 running-upgrade-680100 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.389967109Z" level=info msg="Starting up"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.392699309Z" level=info msg="libcontainerd: started new containerd process" pid=2761
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.392789609Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.392801209Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.392826609Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.392843509Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.432536109Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.432940809Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.433124109Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.433625909Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.433730409Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.435997509Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.436044509Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.436160009Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.436555009Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.436874109Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.436989209Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.437049309Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.437136309Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.437149609Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.447933009Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.448057409Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.448255909Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.448298409Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.448313309Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.448326109Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.448338309Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.448401609Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.448419409Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.448505509Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.448681609Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.448825309Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449514709Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449708609Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449748609Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449761609Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449773109Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449792309Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449805409Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449825309Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449840609Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449853209Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449863509Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449918909Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449933509Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449944309Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.449954509Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.450086509Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.450242509Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.450266809Z" level=info msg="containerd successfully booted in 0.019217s"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.460781209Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.460992009Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.461086609Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.461147809Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.463218609Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.463590409Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.463768309Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.463797909Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.504313909Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.504577309Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.504676409Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.504692009Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.504699309Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.504706809Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.504976609Z" level=info msg="Loading containers: start."
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.628302209Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.703595009Z" level=info msg="Loading containers: done."
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.730153809Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.730669609Z" level=info msg="Daemon has completed initialization"
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.777719609Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 08 22:09:53 running-upgrade-680100 systemd[1]: Started Docker Application Container Engine.
	Jan 08 22:09:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:09:53.778838009Z" level=info msg="API listen on [::]:2376"
	Jan 08 22:10:56 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:10:56.953714712Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/94f7e45b20a527a5016733050e2f44ecbefc6fed5ab92791251c545a12c89b69/shim.sock" debug=false pid=4403
	Jan 08 22:10:57 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:10:57.062728963Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0007852aa1c1940d8794545d3b9710fb7f3f7c8f741e21e203fe7dad02d1660b/shim.sock" debug=false pid=4428
	Jan 08 22:10:57 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:10:57.758501901Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2bd2f07b15e627c89390c268b3a39368050ccc4d7052722b5f7efd07133abe04/shim.sock" debug=false pid=4529
	Jan 08 22:10:58 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:10:58.957590961Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1e677fc038a76937e4499fb992e5f25922103d050a25f44bdef5f5a694295d30/shim.sock" debug=false pid=4597
	Jan 08 22:10:59 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:10:59.077623425Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3226d1aa7d70931ab1cadb9921a9a6214df9ff81fe3ed7c39f353910e859f9b8/shim.sock" debug=false pid=4623
	Jan 08 22:10:59 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:10:59.081391915Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a015b3b97fa8a20c2672e0d1f993e656dea04e1aed2b68f25c2674be2d76822e/shim.sock" debug=false pid=4629
	Jan 08 22:10:59 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:10:59.124052898Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b76f003b30ff5dec2d22f44d781db593ac1028f132a911cfb331eec05bd75a14/shim.sock" debug=false pid=4662
	Jan 08 22:10:59 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:10:59.307250098Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5e61f0771ced02225087196c0509b302bc1efea658bee57c8bfebb24c8d88dde/shim.sock" debug=false pid=4717
	Jan 08 22:11:00 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:11:00.614948131Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bd05e21ddd4d7f8b9c228dd39f1fb19e98c86c09e346f968b5d44bc797520215/shim.sock" debug=false pid=4919
	Jan 08 22:11:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:11:01.066524186Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3a806fc937e35ca050ccc8d15a4902a14c751f1f03fa868c64272b6d4b1ac713/shim.sock" debug=false pid=5048
	Jan 08 22:11:19 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:11:19.747726870Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ad052c1fa5ed19cf06bd74b8695d7462e7a6a62456916495b0803b12464bb880/shim.sock" debug=false pid=5676
	Jan 08 22:11:20 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:11:20.087444426Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/64f04967e22a7460b24035cb378c5cb0d27a3b027c493566db2d163a659fa47a/shim.sock" debug=false pid=5728
	Jan 08 22:11:24 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:11:24.074547544Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/148c8e224531223a1f2eb709da2677c3b504eb624c72dcf3b672b6de6b77b0a2/shim.sock" debug=false pid=5881
	Jan 08 22:11:24 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:11:24.630643059Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dc26a8ed679e70b3f45b977523cd1acb0464ac985cdd678264d9ea47785d6972/shim.sock" debug=false pid=5962
	Jan 08 22:11:24 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:11:24.651334573Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5b04c76c5b72b0623c840cac5eb8f6d4d43a4e419301824d8277496c4848c682/shim.sock" debug=false pid=5974
	Jan 08 22:11:25 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:11:25.160739695Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/987ac24fd0d480229d28918f742d8659edf69e499371dd09a6c3e3a8695f486a/shim.sock" debug=false pid=6071
	Jan 08 22:11:25 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:11:25.848332315Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ee753ed9ca6b81ab0d6ffb1345d6dd67d3a81e5dc26bf70dff04877449b4b4ea/shim.sock" debug=false pid=6129
	Jan 08 22:11:26 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:11:26.276919868Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/073f462bd9a112252fd5243a3bcd19ebb25ca043781e792ecdbe676bd8119890/shim.sock" debug=false pid=6196
	Jan 08 22:16:50 running-upgrade-680100 systemd[1]: Stopping Docker Application Container Engine...
	Jan 08 22:16:50 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:50.900514834Z" level=info msg="Processing signal 'terminated'"
	Jan 08 22:16:51 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:51.941964872Z" level=info msg="shim reaped" id=bd05e21ddd4d7f8b9c228dd39f1fb19e98c86c09e346f968b5d44bc797520215
	Jan 08 22:16:51 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:51.946136686Z" level=info msg="shim reaped" id=5e61f0771ced02225087196c0509b302bc1efea658bee57c8bfebb24c8d88dde
	Jan 08 22:16:51 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:51.952161906Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:51 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:51.952448507Z" level=warning msg="bd05e21ddd4d7f8b9c228dd39f1fb19e98c86c09e346f968b5d44bc797520215 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/bd05e21ddd4d7f8b9c228dd39f1fb19e98c86c09e346f968b5d44bc797520215/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:16:51 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:51.960089133Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:51 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:51.960227133Z" level=warning msg="5e61f0771ced02225087196c0509b302bc1efea658bee57c8bfebb24c8d88dde cleanup: failed to unmount IPC: umount /var/lib/docker/containers/5e61f0771ced02225087196c0509b302bc1efea658bee57c8bfebb24c8d88dde/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:16:51 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:51.990200834Z" level=info msg="shim reaped" id=0007852aa1c1940d8794545d3b9710fb7f3f7c8f741e21e203fe7dad02d1660b
	Jan 08 22:16:51 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:51.995534552Z" level=info msg="shim reaped" id=94f7e45b20a527a5016733050e2f44ecbefc6fed5ab92791251c545a12c89b69
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.000276968Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.004319081Z" level=info msg="shim reaped" id=ee753ed9ca6b81ab0d6ffb1345d6dd67d3a81e5dc26bf70dff04877449b4b4ea
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.005477185Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.019962930Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.087454938Z" level=info msg="shim reaped" id=ad052c1fa5ed19cf06bd74b8695d7462e7a6a62456916495b0803b12464bb880
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.088715142Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.096329466Z" level=info msg="shim reaped" id=987ac24fd0d480229d28918f742d8659edf69e499371dd09a6c3e3a8695f486a
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.102705685Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.102967386Z" level=warning msg="987ac24fd0d480229d28918f742d8659edf69e499371dd09a6c3e3a8695f486a cleanup: failed to unmount IPC: umount /var/lib/docker/containers/987ac24fd0d480229d28918f742d8659edf69e499371dd09a6c3e3a8695f486a/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.107710001Z" level=info msg="shim reaped" id=3a806fc937e35ca050ccc8d15a4902a14c751f1f03fa868c64272b6d4b1ac713
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.108110802Z" level=info msg="shim reaped" id=148c8e224531223a1f2eb709da2677c3b504eb624c72dcf3b672b6de6b77b0a2
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.109778507Z" level=info msg="shim reaped" id=64f04967e22a7460b24035cb378c5cb0d27a3b027c493566db2d163a659fa47a
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.113897520Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.118800535Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.118990836Z" level=warning msg="3a806fc937e35ca050ccc8d15a4902a14c751f1f03fa868c64272b6d4b1ac713 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/3a806fc937e35ca050ccc8d15a4902a14c751f1f03fa868c64272b6d4b1ac713/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.133779882Z" level=info msg="shim reaped" id=a015b3b97fa8a20c2672e0d1f993e656dea04e1aed2b68f25c2674be2d76822e
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.133798682Z" level=info msg="shim reaped" id=b76f003b30ff5dec2d22f44d781db593ac1028f132a911cfb331eec05bd75a14
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.134501084Z" level=info msg="shim reaped" id=2bd2f07b15e627c89390c268b3a39368050ccc4d7052722b5f7efd07133abe04
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.137625993Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.137964795Z" level=warning msg="64f04967e22a7460b24035cb378c5cb0d27a3b027c493566db2d163a659fa47a cleanup: failed to unmount IPC: umount /var/lib/docker/containers/64f04967e22a7460b24035cb378c5cb0d27a3b027c493566db2d163a659fa47a/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.151952738Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.152172638Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.152280839Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.156822953Z" level=info msg="shim reaped" id=5b04c76c5b72b0623c840cac5eb8f6d4d43a4e419301824d8277496c4848c682
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.167143185Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.660386810Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5f9792e14ebd7ea17983f6ca19a526242e0b8b3b7e9cd0ea59a60f33b3d32f0e/shim.sock" debug=false pid=10370
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.694046415Z" level=info msg="shim reaped" id=1e677fc038a76937e4499fb992e5f25922103d050a25f44bdef5f5a694295d30
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.704138746Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.704585347Z" level=warning msg="1e677fc038a76937e4499fb992e5f25922103d050a25f44bdef5f5a694295d30 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/1e677fc038a76937e4499fb992e5f25922103d050a25f44bdef5f5a694295d30/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:16:52 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:52.853531008Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/16151f48b46869da3bd820f67eed06ea8fdb02cf1dda81674a8605eb2f0d7912/shim.sock" debug=false pid=10414
	Jan 08 22:16:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:53.011948995Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/82b6010651646e5ba1e966f3970fd180042139fac6a9027a5eacce56544556b1/shim.sock" debug=false pid=10468
	Jan 08 22:16:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:53.126997820Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a0fce02b0410c95075869dfed9e33b33e769bbe7eb83ddad64c83928464f3d81/shim.sock" debug=false pid=10500
	Jan 08 22:16:53 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:53.718977290Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dda254c6179a8b68ba4ca97bbe5e3e8ad7ad6c700f2cca3bb8a10876ded3a9ba/shim.sock" debug=false pid=10586
	Jan 08 22:16:54 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:54.360809504Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7c2fecfed2c3536c77d661dd40bb892177423f960e557fe0783d55b1b99c0498/shim.sock" debug=false pid=10640
	Jan 08 22:16:56 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:56.261745774Z" level=info msg="shim reaped" id=dc26a8ed679e70b3f45b977523cd1acb0464ac985cdd678264d9ea47785d6972
	Jan 08 22:16:56 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:56.272185295Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:56 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:56.272478596Z" level=warning msg="dc26a8ed679e70b3f45b977523cd1acb0464ac985cdd678264d9ea47785d6972 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/dc26a8ed679e70b3f45b977523cd1acb0464ac985cdd678264d9ea47785d6972/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:16:56 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:56.432407221Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/417cee3eebce6db7a75216376eb73a2e09ee4fed700845778e11ac9f1b1b8921/shim.sock" debug=false pid=10760
	Jan 08 22:16:56 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:56.483123324Z" level=info msg="shim reaped" id=073f462bd9a112252fd5243a3bcd19ebb25ca043781e792ecdbe676bd8119890
	Jan 08 22:16:56 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:56.487126532Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:16:56 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:56.489197636Z" level=warning msg="073f462bd9a112252fd5243a3bcd19ebb25ca043781e792ecdbe676bd8119890 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/073f462bd9a112252fd5243a3bcd19ebb25ca043781e792ecdbe676bd8119890/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:16:56 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:16:56.825923420Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e27b735f200c536cfd1858889ad7a2d5b85d42e0435792dd07384c85527166cb/shim.sock" debug=false pid=10838
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.153010816Z" level=info msg="Container 3226d1aa7d70931ab1cadb9921a9a6214df9ff81fe3ed7c39f353910e859f9b8 failed to exit within 10 seconds of signal 15 - using the force"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.180569438Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cbdffc70e4bb2375bebd23a26469c55c70d763ae2c32c8342667b7783ee62f62/shim.sock" debug=false pid=10913
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.324476252Z" level=info msg="shim reaped" id=3226d1aa7d70931ab1cadb9921a9a6214df9ff81fe3ed7c39f353910e859f9b8
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.332236558Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.332836458Z" level=warning msg="3226d1aa7d70931ab1cadb9921a9a6214df9ff81fe3ed7c39f353910e859f9b8 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/3226d1aa7d70931ab1cadb9921a9a6214df9ff81fe3ed7c39f353910e859f9b8/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.413462622Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.414218423Z" level=info msg="Daemon shutdown complete"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.414261523Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.414285423Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.446726349Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.453104054Z" level=warning msg="1ef9903c448f09800be67d82f63abe612d105bb6dc55da33982b9eab8513d39b cleanup: failed to unmount IPC: umount /var/lib/docker/containers/1ef9903c448f09800be67d82f63abe612d105bb6dc55da33982b9eab8513d39b/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.459626259Z" level=error msg="1ef9903c448f09800be67d82f63abe612d105bb6dc55da33982b9eab8513d39b cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.459742559Z" level=error msg="Handler for POST /containers/1ef9903c448f09800be67d82f63abe612d105bb6dc55da33982b9eab8513d39b/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.479128975Z" level=warning msg="d3f7854e54f757d9faa1705c1e6f72ff0941d5a5bfb7d820adab609c57b0931e cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d3f7854e54f757d9faa1705c1e6f72ff0941d5a5bfb7d820adab609c57b0931e/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.488865382Z" level=error msg="d3f7854e54f757d9faa1705c1e6f72ff0941d5a5bfb7d820adab609c57b0931e cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.489049382Z" level=error msg="Handler for POST /containers/d3f7854e54f757d9faa1705c1e6f72ff0941d5a5bfb7d820adab609c57b0931e/start returned error: grpc: the client connection is closing: unknown"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.506927597Z" level=warning msg="failed to retrieve containerd version: rpc error: code = Canceled desc = grpc: the client connection is closing"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.508092198Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.922500126Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.928466931Z" level=warning msg="0e2f77bf34fe166b6d3ced2a5ae8cad28c16feeb114e28d48e080a1b731a83bc cleanup: failed to unmount IPC: umount /var/lib/docker/containers/0e2f77bf34fe166b6d3ced2a5ae8cad28c16feeb114e28d48e080a1b731a83bc/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.938826239Z" level=error msg="0e2f77bf34fe166b6d3ced2a5ae8cad28c16feeb114e28d48e080a1b731a83bc cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Jan 08 22:17:01 running-upgrade-680100 dockerd[2753]: time="2024-01-08T22:17:01.938866039Z" level=error msg="Handler for POST /containers/0e2f77bf34fe166b6d3ced2a5ae8cad28c16feeb114e28d48e080a1b731a83bc/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: docker.service: Succeeded.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: Stopped Docker Application Container Engine.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 10370 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 10414 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 10468 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 10500 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 10586 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 10640 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 10760 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 10838 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 10913 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:02 running-upgrade-680100 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.476314754Z" level=info msg="Starting up"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.479521356Z" level=info msg="libcontainerd: started new containerd process" pid=11013
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.479708256Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.479861256Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.479944456Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.480071156Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.523270380Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.523765881Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.524337181Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.524736681Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.524852681Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.527030482Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.527137382Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.527553283Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.528190683Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.528718383Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.528820083Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.528849583Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.528859183Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.528867283Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.529132384Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.529156684Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.529220884Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.529241084Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.529568484Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.529589084Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.529601284Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.529614184Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.529625584Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.529647784Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.561673202Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.561858902Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.562547502Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563608403Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563739603Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563767203Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563786503Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563800703Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563812703Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563825303Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563838203Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563850603Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563862303Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563904503Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563977503Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.563999603Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.564011703Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.564166603Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.564214503Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.564228503Z" level=info msg="containerd successfully booted in 0.043028s"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.579588912Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.579777912Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.579868912Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.579994312Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.581719913Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.581760313Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.581778913Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.581795813Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.589326617Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.668071461Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.668297961Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.668488661Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.668557161Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.668622861Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.668734961Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Jan 08 22:17:02 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:02.668955962Z" level=info msg="Loading containers: start."
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.621737549Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.633644653Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.659656261Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=16151f48b46869da3bd820f67eed06ea8fdb02cf1dda81674a8605eb2f0d7912 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/16151f48b46869da3bd820f67eed06ea8fdb02cf1dda81674a8605eb2f0d7912"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.661859062Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.663040362Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.683277769Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=dda254c6179a8b68ba4ca97bbe5e3e8ad7ad6c700f2cca3bb8a10876ded3a9ba path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/dda254c6179a8b68ba4ca97bbe5e3e8ad7ad6c700f2cca3bb8a10876ded3a9ba"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.683757569Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.698315274Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=5f9792e14ebd7ea17983f6ca19a526242e0b8b3b7e9cd0ea59a60f33b3d32f0e path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/5f9792e14ebd7ea17983f6ca19a526242e0b8b3b7e9cd0ea59a60f33b3d32f0e"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.698793374Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.727621083Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.727818483Z" level=warning msg="82b6010651646e5ba1e966f3970fd180042139fac6a9027a5eacce56544556b1 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/82b6010651646e5ba1e966f3970fd180042139fac6a9027a5eacce56544556b1/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.738941087Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.739252887Z" level=warning msg="7c2fecfed2c3536c77d661dd40bb892177423f960e557fe0783d55b1b99c0498 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7c2fecfed2c3536c77d661dd40bb892177423f960e557fe0783d55b1b99c0498/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.763923195Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=82b6010651646e5ba1e966f3970fd180042139fac6a9027a5eacce56544556b1 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/82b6010651646e5ba1e966f3970fd180042139fac6a9027a5eacce56544556b1"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.764465195Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.764699495Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=7c2fecfed2c3536c77d661dd40bb892177423f960e557fe0783d55b1b99c0498 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/7c2fecfed2c3536c77d661dd40bb892177423f960e557fe0783d55b1b99c0498"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.765036695Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.769404297Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.777278699Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.785334802Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=417cee3eebce6db7a75216376eb73a2e09ee4fed700845778e11ac9f1b1b8921 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/417cee3eebce6db7a75216376eb73a2e09ee4fed700845778e11ac9f1b1b8921"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.786907403Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.787965603Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=cbdffc70e4bb2375bebd23a26469c55c70d763ae2c32c8342667b7783ee62f62 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/cbdffc70e4bb2375bebd23a26469c55c70d763ae2c32c8342667b7783ee62f62"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.788259503Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.959881559Z" level=info msg="Removing stale sandbox 54606fc5760a0bb636b945ce19ee35c68fd741f970a1e678918cc138989deae4 (cbdffc70e4bb2375bebd23a26469c55c70d763ae2c32c8342667b7783ee62f62)"
	Jan 08 22:17:03 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:03.962800560Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b5c68db7ae78ff10cc41dfe06eda05218de51d41e157285de1ca25a9e387e73b 133ffc0a0140ef1291b8e8fe634ef34ad590720e58f721d5a7ad35dd46476dc1], retrying...."
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.104902382Z" level=info msg="Removing stale sandbox a95e8ec129362068a46a406ed29d8ed4355910e2dc27784cded0d740bfbe6d29 (16151f48b46869da3bd820f67eed06ea8fdb02cf1dda81674a8605eb2f0d7912)"
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.108259583Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b5c68db7ae78ff10cc41dfe06eda05218de51d41e157285de1ca25a9e387e73b 2fc52b319f765d45c22aab8037e5877d402369f05e0ecb9e48098517aa473728], retrying...."
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.225760994Z" level=info msg="Removing stale sandbox b54b42228f059b43e69c16db7d830db81ead0ff4fba183e94e670dab65a28266 (dda254c6179a8b68ba4ca97bbe5e3e8ad7ad6c700f2cca3bb8a10876ded3a9ba)"
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.229169994Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b5c68db7ae78ff10cc41dfe06eda05218de51d41e157285de1ca25a9e387e73b 835454e44cb891a45577311ffd395572fa8adc9996180a29bc87c2b0bf5cb868], retrying...."
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.360880107Z" level=info msg="Removing stale sandbox b79e9428b2255fe394cd707ecf7d828f274765d335567723b5359e675777cb7d (417cee3eebce6db7a75216376eb73a2e09ee4fed700845778e11ac9f1b1b8921)"
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.367590908Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 81ea524f78b9c25368981085fd4903aefa279ec2e33eca681822225394f96a38 b7a7d01b8cacc2760eac317c3bb7c626d7c16439661ab4b04287c82c0a223cc4], retrying...."
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.485583319Z" level=info msg="Removing stale sandbox cd2ee630010eda43b09e74c8f341ff654dcbeb803222b8b5134d28a7eddeb145 (5f9792e14ebd7ea17983f6ca19a526242e0b8b3b7e9cd0ea59a60f33b3d32f0e)"
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.488726920Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b5c68db7ae78ff10cc41dfe06eda05218de51d41e157285de1ca25a9e387e73b bcddfe03291e7e1d21e6b352595b4ff26071d3d479f14fbb8537659aaa146065], retrying...."
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.505475321Z" level=info msg="There are old running containers, the network config will not take affect"
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.524575523Z" level=info msg="Loading containers: done."
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.553634126Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.553822626Z" level=info msg="Daemon has completed initialization"
	Jan 08 22:17:04 running-upgrade-680100 systemd[1]: Started Docker Application Container Engine.
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.575522328Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.575578228Z" level=info msg="API listen on [::]:2376"
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.986662968Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/765cae7a806c5d98c7bb2ddc9617fc86ae05331f8304c42e52fcf3dc35579602/shim.sock" debug=false pid=11645
	Jan 08 22:17:04 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:04.989891668Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fe952f39beabedbd5b1d0159905a6835e42486213b7e721f79fb76946f782047/shim.sock" debug=false pid=11646
	Jan 08 22:17:05 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:05.059671862Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/286c5f0415000e09d2d945079c18e13ca0958096a7695d2cc26f54c48653316a/shim.sock" debug=false pid=11682
	Jan 08 22:17:05 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:05.120597054Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ec5e099e612bd239f4be4a67dc87fd947c5d8599e3f7f86eea178d77aabf89d6/shim.sock" debug=false pid=11688
	Jan 08 22:17:05 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:05.125914953Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fc00af195f4d5abff7d4e8a867c40004abe64ac8586dff21ea21e8326feb8a56/shim.sock" debug=false pid=11700
	Jan 08 22:17:05 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:05.285943833Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:05 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:05.286319233Z" level=warning msg="a0fce02b0410c95075869dfed9e33b33e769bbe7eb83ddad64c83928464f3d81 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/a0fce02b0410c95075869dfed9e33b33e769bbe7eb83ddad64c83928464f3d81/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:05 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:05.336798826Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=a0fce02b0410c95075869dfed9e33b33e769bbe7eb83ddad64c83928464f3d81 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/a0fce02b0410c95075869dfed9e33b33e769bbe7eb83ddad64c83928464f3d81"
	Jan 08 22:17:05 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:05.337514926Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:05 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:05.477500508Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/55a70babd955a14f8cf0dde5acd5c59e3c9f23969e7f85985f543a9a74251106/shim.sock" debug=false pid=11847
	Jan 08 22:17:05 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:05.649148986Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f9c0d5cacfe92db986a709f69b556f690b9409e6284842a5d2e7182aea23e4cb/shim.sock" debug=false pid=11907
	Jan 08 22:17:05 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:05.784085869Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/04c2b20ab76e803ca209782707f610715502a095bd9de02b0d569c2e50f4f4ba/shim.sock" debug=false pid=11943
	Jan 08 22:17:05 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:05.945202248Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9b649ed41c63dbbc6edaa8d404297c03b67d5cecbfe197112348c23aae87a45c/shim.sock" debug=false pid=11983
	Jan 08 22:17:08 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:08.378509431Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:08 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:08.379512030Z" level=warning msg="e27b735f200c536cfd1858889ad7a2d5b85d42e0435792dd07384c85527166cb cleanup: failed to unmount IPC: umount /var/lib/docker/containers/e27b735f200c536cfd1858889ad7a2d5b85d42e0435792dd07384c85527166cb/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:08 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:08.389208822Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=e27b735f200c536cfd1858889ad7a2d5b85d42e0435792dd07384c85527166cb path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/e27b735f200c536cfd1858889ad7a2d5b85d42e0435792dd07384c85527166cb"
	Jan 08 22:17:08 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:08.389522722Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:08 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:08.526934514Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3032a2b99f2906facc0a8658d26a6a37931d453297f774b554911a55c400328e/shim.sock" debug=false pid=12130
	Jan 08 22:17:09 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:09.013768631Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/70ecf5f9d8d8336740535e876507144f03c7e98cfa80c26267045209dd200ad2/shim.sock" debug=false pid=12193
	Jan 08 22:17:09 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:09.587288861Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d037da605157aca849e39deba5e54b1e98fe5338c701befa8207cfc96de9de3d/shim.sock" debug=false pid=12263
	Jan 08 22:17:09 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:09.975549675Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4db7689dc25eaa0929acbd62e6f654b3693bec31604e0b7122dd4cc23d2da29e/shim.sock" debug=false pid=12356
	Jan 08 22:17:10 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:10.988758062Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/531a9db4529c4313654f78f4cba8f8fb39560cda67a2573901321cae465e18b1/shim.sock" debug=false pid=12409
	Jan 08 22:17:11 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:11.303909922Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ded3fdde774b13af952974a90dd94e04c7166aeb9d709e25b36029391dd117bf/shim.sock" debug=false pid=12456
	Jan 08 22:17:16 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:16.533674166Z" level=info msg="shim reaped" id=f9c0d5cacfe92db986a709f69b556f690b9409e6284842a5d2e7182aea23e4cb
	Jan 08 22:17:16 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:16.544263041Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:16 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:16.544443840Z" level=warning msg="f9c0d5cacfe92db986a709f69b556f690b9409e6284842a5d2e7182aea23e4cb cleanup: failed to unmount IPC: umount /var/lib/docker/containers/f9c0d5cacfe92db986a709f69b556f690b9409e6284842a5d2e7182aea23e4cb/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:17 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:17.842679191Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/122d29bda1f506f812663fcd8d27e8ee8050bc254abf52cd661c16155e83e714/shim.sock" debug=false pid=12580
	Jan 08 22:17:22 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:22.897944695Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b42d8c701c28a43f390c0b957d7e0f885f3a8f4a1f07b941f146a270c4a2389a/shim.sock" debug=false pid=12681
	Jan 08 22:17:23 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:23.852278240Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/57f933937b30561281c9f3575ca0dcb382d05d1ae34f8236d376da9398965fc3/shim.sock" debug=false pid=12734
	Jan 08 22:17:27 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:27.922522803Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1d8f4eebfd72c8a74160f56a85987babd8c2f119ddefba80fabc493667ff1d67/shim.sock" debug=false pid=12824
	Jan 08 22:17:33 running-upgrade-680100 systemd[1]: Stopping Docker Application Container Engine...
	Jan 08 22:17:33 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:33.215062177Z" level=info msg="Processing signal 'terminated'"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.241683688Z" level=info msg="shim reaped" id=55a70babd955a14f8cf0dde5acd5c59e3c9f23969e7f85985f543a9a74251106
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.252507033Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.255265893Z" level=info msg="shim reaped" id=286c5f0415000e09d2d945079c18e13ca0958096a7695d2cc26f54c48653316a
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.258871242Z" level=info msg="shim reaped" id=57f933937b30561281c9f3575ca0dcb382d05d1ae34f8236d376da9398965fc3
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.261546803Z" level=info msg="shim reaped" id=3032a2b99f2906facc0a8658d26a6a37931d453297f774b554911a55c400328e
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.264369463Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.280011539Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.286115052Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.286496646Z" level=warning msg="57f933937b30561281c9f3575ca0dcb382d05d1ae34f8236d376da9398965fc3 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/57f933937b30561281c9f3575ca0dcb382d05d1ae34f8236d376da9398965fc3/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.321918340Z" level=info msg="shim reaped" id=fe952f39beabedbd5b1d0159905a6835e42486213b7e721f79fb76946f782047
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.322932125Z" level=info msg="shim reaped" id=765cae7a806c5d98c7bb2ddc9617fc86ae05331f8304c42e52fcf3dc35579602
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.329496731Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.329833026Z" level=info msg="shim reaped" id=fc00af195f4d5abff7d4e8a867c40004abe64ac8586dff21ea21e8326feb8a56
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.331417304Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.341259663Z" level=info msg="shim reaped" id=1d8f4eebfd72c8a74160f56a85987babd8c2f119ddefba80fabc493667ff1d67
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.348950153Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.352386404Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.360976681Z" level=warning msg="1d8f4eebfd72c8a74160f56a85987babd8c2f119ddefba80fabc493667ff1d67 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/1d8f4eebfd72c8a74160f56a85987babd8c2f119ddefba80fabc493667ff1d67/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.361181978Z" level=info msg="shim reaped" id=d037da605157aca849e39deba5e54b1e98fe5338c701befa8207cfc96de9de3d
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.368743770Z" level=info msg="shim reaped" id=531a9db4529c4313654f78f4cba8f8fb39560cda67a2573901321cae465e18b1
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.376378060Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.376867453Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.408117206Z" level=info msg="shim reaped" id=ded3fdde774b13af952974a90dd94e04c7166aeb9d709e25b36029391dd117bf
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.410409874Z" level=info msg="shim reaped" id=ec5e099e612bd239f4be4a67dc87fd947c5d8599e3f7f86eea178d77aabf89d6
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.417539672Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.417700869Z" level=warning msg="ded3fdde774b13af952974a90dd94e04c7166aeb9d709e25b36029391dd117bf cleanup: failed to unmount IPC: umount /var/lib/docker/containers/ded3fdde774b13af952974a90dd94e04c7166aeb9d709e25b36029391dd117bf/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.460759953Z" level=info msg="shim reaped" id=122d29bda1f506f812663fcd8d27e8ee8050bc254abf52cd661c16155e83e714
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.463110819Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.472996478Z" level=warning msg="122d29bda1f506f812663fcd8d27e8ee8050bc254abf52cd661c16155e83e714 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/122d29bda1f506f812663fcd8d27e8ee8050bc254abf52cd661c16155e83e714/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.486887579Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.718397367Z" level=info msg="shim reaped" id=b42d8c701c28a43f390c0b957d7e0f885f3a8f4a1f07b941f146a270c4a2389a
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.719536351Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.719658549Z" level=warning msg="b42d8c701c28a43f390c0b957d7e0f885f3a8f4a1f07b941f146a270c4a2389a cleanup: failed to unmount IPC: umount /var/lib/docker/containers/b42d8c701c28a43f390c0b957d7e0f885f3a8f4a1f07b941f146a270c4a2389a/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.870956184Z" level=info msg="shim reaped" id=9b649ed41c63dbbc6edaa8d404297c03b67d5cecbfe197112348c23aae87a45c
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.881193438Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:34 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:34.881703730Z" level=warning msg="9b649ed41c63dbbc6edaa8d404297c03b67d5cecbfe197112348c23aae87a45c cleanup: failed to unmount IPC: umount /var/lib/docker/containers/9b649ed41c63dbbc6edaa8d404297c03b67d5cecbfe197112348c23aae87a45c/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:35 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:35.003017294Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9dde7dccf20c01f2a124dc6e1e1cf61eb182d9db9302219a07111d295a7af5be/shim.sock" debug=false pid=13696
	Jan 08 22:17:35 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:35.275588494Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b5dbb07377cb3f18866a6683bc12764d5af0a31978b38faccb1a16664f48f2b2/shim.sock" debug=false pid=13737
	Jan 08 22:17:35 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:35.826805408Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8dbb003a9d171ffff34197955c6ace3ee0f819d227c3106fbb3912fbac737a18/shim.sock" debug=false pid=13799
	Jan 08 22:17:38 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:38.597725361Z" level=info msg="shim reaped" id=4db7689dc25eaa0929acbd62e6f654b3693bec31604e0b7122dd4cc23d2da29e
	Jan 08 22:17:38 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:38.606095141Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:38 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:38.606274739Z" level=warning msg="4db7689dc25eaa0929acbd62e6f654b3693bec31604e0b7122dd4cc23d2da29e cleanup: failed to unmount IPC: umount /var/lib/docker/containers/4db7689dc25eaa0929acbd62e6f654b3693bec31604e0b7122dd4cc23d2da29e/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:38 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:38.667052469Z" level=info msg="shim reaped" id=70ecf5f9d8d8336740535e876507144f03c7e98cfa80c26267045209dd200ad2
	Jan 08 22:17:38 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:38.700088496Z" level=warning msg="70ecf5f9d8d8336740535e876507144f03c7e98cfa80c26267045209dd200ad2 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/70ecf5f9d8d8336740535e876507144f03c7e98cfa80c26267045209dd200ad2/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:38 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:38.700099996Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:39 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:39.455629786Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/46ca216b863b19f48bbcfee5ec9ca95fa4088f8f77d16bc185f5053700853a2b/shim.sock" debug=false pid=13910
	Jan 08 22:17:41 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:41.004647222Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9461c83ba4133d94316c71f59168caab37c41a27e421328f86ff46510def85f1/shim.sock" debug=false pid=13964
	Jan 08 22:17:41 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:41.514167232Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/97a808a305928677851d9083961a5500927b3a129ddb5e927f2324511c7a36dd/shim.sock" debug=false pid=14027
	Jan 08 22:17:42 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:42.756904951Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/38c13b445fc98d5a630bd5b9b98e3b0f2cf89216d905bcbe5e033990b6a1a8d1/shim.sock" debug=false pid=14071
	Jan 08 22:17:43 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:43.577159415Z" level=info msg="Container 04c2b20ab76e803ca209782707f610715502a095bd9de02b0d569c2e50f4f4ba failed to exit within 10 seconds of signal 15 - using the force"
	Jan 08 22:17:43 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:43.751927114Z" level=info msg="shim reaped" id=04c2b20ab76e803ca209782707f610715502a095bd9de02b0d569c2e50f4f4ba
	Jan 08 22:17:43 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:43.759642804Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 22:17:43 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:43.759671303Z" level=warning msg="04c2b20ab76e803ca209782707f610715502a095bd9de02b0d569c2e50f4f4ba cleanup: failed to unmount IPC: umount /var/lib/docker/containers/04c2b20ab76e803ca209782707f610715502a095bd9de02b0d569c2e50f4f4ba/mounts/shm, flags: 0x2: no such file or directory"
	Jan 08 22:17:44 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:44.134243044Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 08 22:17:44 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:44.135368028Z" level=info msg="Daemon shutdown complete"
	Jan 08 22:17:44 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:44.135434727Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 08 22:17:44 running-upgrade-680100 dockerd[11005]: time="2024-01-08T22:17:44.135719023Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: docker.service: Succeeded.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: Stopped Docker Application Container Engine.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 13696 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 13737 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 13799 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 13910 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 13964 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 14027 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: docker.service: Found left-over process 14071 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 22:17:45 running-upgrade-680100 dockerd[14178]: time="2024-01-08T22:17:45.201158078Z" level=info msg="Starting up"
	Jan 08 22:17:45 running-upgrade-680100 dockerd[14178]: time="2024-01-08T22:17:45.204354033Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 22:17:45 running-upgrade-680100 dockerd[14178]: time="2024-01-08T22:17:45.204421432Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 22:17:45 running-upgrade-680100 dockerd[14178]: time="2024-01-08T22:17:45.204447831Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 08 22:17:45 running-upgrade-680100 dockerd[14178]: time="2024-01-08T22:17:45.204465531Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 22:17:45 running-upgrade-680100 dockerd[14178]: time="2024-01-08T22:17:45.204812426Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Jan 08 22:17:45 running-upgrade-680100 dockerd[14178]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused": unavailable
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 08 22:17:45 running-upgrade-680100 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0108 22:17:45.306600   10304 out.go:239] * 
	* 
	W0108 22:17:45.308130   10304 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 22:17:45.355524   10304 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-windows-amd64.exe start -p running-upgrade-680100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2024-01-08 22:17:45.9368113 +0000 UTC m=+7627.806114201
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-680100 -n running-upgrade-680100
E0108 22:17:52.257748    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-680100 -n running-upgrade-680100: exit status 6 (12.9474561s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 22:17:46.057516    2568 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0108 22:17:58.820229    2568 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-680100" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-680100" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-680100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-680100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-680100: (46.8219507s)
--- FAIL: TestRunningBinaryUpgrade (708.39s)
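
The dockerd journal captured above shows the post-upgrade daemon exiting because it could not reach containerd ("failed to dial \"/run/containerd/containerd.sock\" ... connection refused"), which is what the second `start` reports as exit status 90. For manual triage outside the harness, a minimal command sketch, assuming the same profile name and driver as the failed run (the `journalctl` invocation is illustrative and not taken from the captured output):

	out/minikube-windows-amd64.exe start -p running-upgrade-680100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
	out/minikube-windows-amd64.exe -p running-upgrade-680100 logs --file=logs.txt
	out/minikube-windows-amd64.exe -p running-upgrade-680100 ssh -- sudo journalctl -u docker -u containerd --no-pager
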

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (312.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-152000 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-152000 --driver=hyperv: exit status 1 (4m59.7241852s)

                                                
                                                
-- stdout --
	* [NoKubernetes-152000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node NoKubernetes-152000 in cluster NoKubernetes-152000
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 22:06:57.770563    2092 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-152000 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-152000 -n NoKubernetes-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-152000 -n NoKubernetes-152000: exit status 6 (12.8399837s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 22:11:57.479256    8936 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0108 22:12:10.122015    8936 status.go:415] kubeconfig endpoint: extract IP: "NoKubernetes-152000" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "NoKubernetes-152000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (312.57s)
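
In this failure the VM came up but the profile never appeared in the kubeconfig ("NoKubernetes-152000" does not appear in ...\kubeconfig), so the post-mortem `status` call sees the host "Running" while kubectl still points at a stale minikube-vm. A minimal sketch of the fix the captured stdout itself suggests, assuming the same profile name, followed by the harness's own status invocation as a re-check:

	out/minikube-windows-amd64.exe -p NoKubernetes-152000 update-context
	out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-152000
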

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (505.39s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.6.2.2534004794.exe start -p stopped-upgrade-266300 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:196: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.6.2.2534004794.exe start -p stopped-upgrade-266300 --memory=2200 --vm-driver=hyperv: (4m24.1248001s)
version_upgrade_test.go:205: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.6.2.2534004794.exe -p stopped-upgrade-266300 stop
E0108 22:30:48.033183    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
version_upgrade_test.go:205: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.6.2.2534004794.exe -p stopped-upgrade-266300 stop: (27.2204341s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-266300 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p stopped-upgrade-266300 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (3m33.8114561s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-266300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the hyperv driver based on existing profile
	* Starting control plane node stopped-upgrade-266300 in cluster stopped-upgrade-266300
	* Restarting existing hyperv VM for "stopped-upgrade-266300" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 22:30:55.419984    7060 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0108 22:30:55.503020    7060 out.go:296] Setting OutFile to fd 1756 ...
	I0108 22:30:55.503020    7060 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:30:55.503020    7060 out.go:309] Setting ErrFile to fd 1760...
	I0108 22:30:55.503020    7060 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:30:55.529021    7060 out.go:303] Setting JSON to false
	I0108 22:30:55.534030    7060 start.go:128] hostinfo: {"hostname":"minikube7","uptime":30997,"bootTime":1704722057,"procs":210,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0108 22:30:55.534030    7060 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 22:30:55.612012    7060 out.go:177] * [stopped-upgrade-266300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I0108 22:30:55.614102    7060 notify.go:220] Checking for updates...
	I0108 22:30:55.661294    7060 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 22:30:55.662239    7060 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:30:55.709310    7060 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0108 22:30:55.709310    7060 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 22:30:55.755866    7060 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:30:55.758382    7060 config.go:182] Loaded profile config "stopped-upgrade-266300": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0108 22:30:55.758382    7060 start_flags.go:694] config upgrade: Driver=hyperv
	I0108 22:30:55.758382    7060 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0108 22:30:55.758382    7060 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\stopped-upgrade-266300\config.json ...
	I0108 22:30:55.864769    7060 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0108 22:30:55.865772    7060 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:31:01.707242    7060 out.go:177] * Using the hyperv driver based on existing profile
	I0108 22:31:01.708872    7060 start.go:298] selected driver: hyperv
	I0108 22:31:01.708930    7060 start.go:902] validating driver "hyperv" against &{Name:stopped-upgrade-266300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0
ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.29.99.183 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 22:31:01.709267    7060 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:31:01.762794    7060 cni.go:84] Creating CNI manager for ""
	I0108 22:31:01.762921    7060 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0108 22:31:01.762921    7060 start_flags.go:323] config:
	{Name:stopped-upgrade-266300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.29.99.183 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 22:31:01.763385    7060 iso.go:125] acquiring lock: {Name:mk1869c5b5033c1301af33a7fa364ec43dd63efe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:31:01.814041    7060 out.go:177] * Starting control plane node stopped-upgrade-266300 in cluster stopped-upgrade-266300
	I0108 22:31:01.815220    7060 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	W0108 22:31:01.861952    7060 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0108 22:31:01.862875    7060 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\stopped-upgrade-266300\config.json ...
	I0108 22:31:01.863006    7060 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0
	I0108 22:31:01.863064    7060 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5
	I0108 22:31:01.863114    7060 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0
	I0108 22:31:01.863179    7060 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0
	I0108 22:31:01.863064    7060 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0108 22:31:01.863179    7060 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0
	I0108 22:31:01.863064    7060 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0108 22:31:01.863064    7060 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I0108 22:31:01.867468    7060 start.go:365] acquiring machines lock for stopped-upgrade-266300: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 22:31:02.060198    7060 cache.go:107] acquiring lock: {Name:mkc6e9060bea9211e4f8126ac5de344442cb8c23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:31:02.060198    7060 cache.go:107] acquiring lock: {Name:mk43c24b3570a50e54ec9f1dc43aba5ea2e54859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:31:02.060198    7060 cache.go:107] acquiring lock: {Name:mk945c9573a262bf2c410f3ec338c9e4cbac7ce3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:31:02.060198    7060 cache.go:107] acquiring lock: {Name:mkeac0ccf1d6f0e0eb0c19801602a218964c6025 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:31:02.060198    7060 cache.go:107] acquiring lock: {Name:mke680978131adbec647605a81bab7c783de93d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:31:02.060198    7060 cache.go:107] acquiring lock: {Name:mk3a663ba67028a054dd5a6e96ba367c56e950d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:31:02.060198    7060 cache.go:107] acquiring lock: {Name:mk1869bccfa4db5e538bd31af28e9c95a48df16c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:31:02.060198    7060 cache.go:107] acquiring lock: {Name:mk6522f86f404131d1768d0de0ce775513ec42e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:31:02.060198    7060 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 exists
	I0108 22:31:02.060198    7060 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 exists
	I0108 22:31:02.060198    7060 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.17.0" took 197.0571ms
	I0108 22:31:02.060198    7060 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 succeeded
	I0108 22:31:02.060746    7060 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 exists
	I0108 22:31:02.060841    7060 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns_1.6.5" took 197.7266ms
	I0108 22:31:02.060841    7060 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 exists
	I0108 22:31:02.060924    7060 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 succeeded
	I0108 22:31:02.060746    7060 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 exists
	I0108 22:31:02.060924    7060 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 exists
	I0108 22:31:02.060198    7060 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0108 22:31:02.060841    7060 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.4.3-0" took 197.2426ms
	I0108 22:31:02.061455    7060 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.17.0" took 198.2758ms
	I0108 22:31:02.061519    7060 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 succeeded
	I0108 22:31:02.061169    7060 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.17.0" took 197.8519ms
	I0108 22:31:02.061519    7060 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.17.0" took 198.3908ms
	I0108 22:31:02.061596    7060 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 succeeded
	I0108 22:31:02.061519    7060 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 succeeded
	I0108 22:31:02.061519    7060 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 198.2481ms
	I0108 22:31:02.061758    7060 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0108 22:31:02.061596    7060 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 succeeded
	I0108 22:31:02.061123    7060 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 exists
	I0108 22:31:02.064832    7060 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.1" took 201.2334ms
	I0108 22:31:02.064832    7060 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 succeeded
	I0108 22:31:02.064832    7060 cache.go:87] Successfully saved all images to host disk.
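The cache.go:115 "exists" lines above are the image-cache fast path: because the v1.17.0 preload tarball returned 404, each of the eight images is looked up individually under .minikube\cache\images before any pull is attempted, which is why every one resolves in roughly 200 ms. A minimal shell sketch of that existence check, under the assumption it is just a per-file lookup (the CACHE_DIR path and loop are illustrative, not minikube's internal code):

	# Sketch: report which image tarballs are already cached on the host.
	CACHE_DIR="$HOME/.minikube/cache/images/amd64/registry.k8s.io"
	for img in kube-apiserver_v1.17.0 kube-controller-manager_v1.17.0 \
	           kube-scheduler_v1.17.0 kube-proxy_v1.17.0 \
	           coredns_1.6.5 etcd_3.4.3-0 pause_3.1; do
	    if [ -f "$CACHE_DIR/$img" ]; then
	        echo "cached: $img"        # corresponds to the "exists" lines in the log
	    else
	        echo "needs save: $img"    # would trigger a pull and save-to-tar
	    fi
	done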
	I0108 22:32:20.753946    7060 start.go:369] acquired machines lock for "stopped-upgrade-266300" in 1m18.8858557s
	I0108 22:32:20.754175    7060 start.go:96] Skipping create...Using existing machine configuration
	I0108 22:32:20.754213    7060 fix.go:54] fixHost starting: minikube
	I0108 22:32:20.754948    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:32:23.009823    7060 main.go:141] libmachine: [stdout =====>] : Off
	
	I0108 22:32:23.009823    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:32:23.009823    7060 fix.go:102] recreateIfNeeded on stopped-upgrade-266300: state=Stopped err=<nil>
	W0108 22:32:23.010146    7060 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 22:32:23.011371    7060 out.go:177] * Restarting existing hyperv VM for "stopped-upgrade-266300" ...
	I0108 22:32:23.011949    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM stopped-upgrade-266300
	I0108 22:32:26.168828    7060 main.go:141] libmachine: [stdout =====>] : 
	I0108 22:32:26.168897    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:32:26.168897    7060 main.go:141] libmachine: Waiting for host to start...
	I0108 22:32:26.168981    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:32:28.981117    7060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:32:28.981117    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:32:28.981312    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-266300 ).networkadapters[0]).ipaddresses[0]
	I0108 22:32:31.891607    7060 main.go:141] libmachine: [stdout =====>] : 
	I0108 22:32:31.891670    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:32:32.907170    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:32:35.130447    7060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:32:35.130500    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:32:35.130718    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-266300 ).networkadapters[0]).ipaddresses[0]
	I0108 22:32:38.008200    7060 main.go:141] libmachine: [stdout =====>] : 
	I0108 22:32:38.008268    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:32:39.017478    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:32:41.324240    7060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:32:41.324240    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:32:41.324240    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-266300 ).networkadapters[0]).ipaddresses[0]
	I0108 22:32:43.913654    7060 main.go:141] libmachine: [stdout =====>] : 
	I0108 22:32:43.913654    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:32:44.926336    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:32:47.192946    7060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:32:47.192946    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:32:47.193190    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-266300 ).networkadapters[0]).ipaddresses[0]
	I0108 22:32:49.823450    7060 main.go:141] libmachine: [stdout =====>] : 
	I0108 22:32:49.823450    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:32:50.832814    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:32:53.131052    7060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:32:53.131052    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:32:53.131156    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-266300 ).networkadapters[0]).ipaddresses[0]
	I0108 22:32:55.645879    7060 main.go:141] libmachine: [stdout =====>] : 
	I0108 22:32:55.646026    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:32:56.657345    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:32:58.971184    7060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:32:58.971423    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:32:58.971423    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-266300 ).networkadapters[0]).ipaddresses[0]
	I0108 22:33:01.558802    7060 main.go:141] libmachine: [stdout =====>] : 
	I0108 22:33:01.558944    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:33:02.567806    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:33:04.946754    7060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:33:04.946886    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:33:04.947039    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-266300 ).networkadapters[0]).ipaddresses[0]
	I0108 22:33:07.769152    7060 main.go:141] libmachine: [stdout =====>] : 
	I0108 22:33:07.769221    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:33:08.783682    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:33:11.170297    7060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:33:11.170335    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:33:11.170431    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-266300 ).networkadapters[0]).ipaddresses[0]
	I0108 22:33:13.878072    7060 main.go:141] libmachine: [stdout =====>] : 172.29.99.183
	
	I0108 22:33:13.878072    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:33:13.880970    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:33:16.536747    7060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:33:16.536833    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:33:16.537076    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-266300 ).networkadapters[0]).ipaddresses[0]
	I0108 22:33:19.479037    7060 main.go:141] libmachine: [stdout =====>] : 172.29.99.183
	
	I0108 22:33:19.479290    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:33:19.479427    7060 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\stopped-upgrade-266300\config.json ...
	I0108 22:33:19.481873    7060 machine.go:88] provisioning docker machine ...
	I0108 22:33:19.481873    7060 buildroot.go:166] provisioning hostname "stopped-upgrade-266300"
	I0108 22:33:19.481985    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:33:21.998960    7060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:33:21.998960    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:33:21.998960    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-266300 ).networkadapters[0]).ipaddresses[0]
	I0108 22:33:24.692900    7060 main.go:141] libmachine: [stdout =====>] : 172.29.99.183
	
	I0108 22:33:24.692900    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:33:24.699564    7060 main.go:141] libmachine: Using SSH client type: native
	I0108 22:33:24.700628    7060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.99.183 22 <nil> <nil>}
	I0108 22:33:24.700727    7060 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-266300 && echo "stopped-upgrade-266300" | sudo tee /etc/hostname
	I0108 22:33:24.848561    7060 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-266300
	
	I0108 22:33:24.848561    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:33:27.427122    7060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:33:27.427122    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:33:27.427296    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-266300 ).networkadapters[0]).ipaddresses[0]
	I0108 22:33:30.240366    7060 main.go:141] libmachine: [stdout =====>] : 172.29.99.183
	
	I0108 22:33:30.240757    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:33:30.248852    7060 main.go:141] libmachine: Using SSH client type: native
	I0108 22:33:30.249510    7060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.99.183 22 <nil> <nil>}
	I0108 22:33:30.249510    7060 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-266300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-266300/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-266300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:33:30.397500    7060 main.go:141] libmachine: SSH cmd err, output: <nil>: 
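The /etc/hosts script above is idempotent: it only edits the file when no entry for stopped-upgrade-266300 exists, preferring to rewrite an existing 127.0.1.1 line rather than append a duplicate. Two illustrative checks on the guest (not part of the test flow) that confirm what the provisioner just did:

	# Confirm the hostname and its loopback mapping (illustrative diagnostics).
	hostname                                   # expected: stopped-upgrade-266300
	grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts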
	I0108 22:33:30.397500    7060 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0108 22:33:30.397500    7060 buildroot.go:174] setting up certificates
	I0108 22:33:30.397500    7060 provision.go:83] configureAuth start
	I0108 22:33:30.397623    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:33:33.404835    7060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:33:33.404947    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:33:33.404947    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-266300 ).networkadapters[0]).ipaddresses[0]
	I0108 22:33:36.128530    7060 main.go:141] libmachine: [stdout =====>] : 172.29.99.183
	
	I0108 22:33:36.128745    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:33:36.128745    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:33:38.271276    7060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:33:38.271471    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:33:38.271579    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-266300 ).networkadapters[0]).ipaddresses[0]
	I0108 22:33:40.830968    7060 main.go:141] libmachine: [stdout =====>] : 172.29.99.183
	
	I0108 22:33:40.830968    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:33:40.830968    7060 provision.go:138] copyHostCerts
	I0108 22:33:40.831529    7060 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0108 22:33:40.831600    7060 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0108 22:33:40.832122    7060 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0108 22:33:40.833984    7060 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0108 22:33:40.834058    7060 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0108 22:33:40.834337    7060 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0108 22:33:40.835697    7060 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0108 22:33:40.835697    7060 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0108 22:33:40.835912    7060 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0108 22:33:40.836881    7060 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.stopped-upgrade-266300 san=[172.29.99.183 172.29.99.183 localhost 127.0.0.1 minikube stopped-upgrade-266300]
	I0108 22:33:41.098886    7060 provision.go:172] copyRemoteCerts
	I0108 22:33:41.113514    7060 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:33:41.113589    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:33:43.250154    7060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:33:43.250289    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:33:43.250601    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-266300 ).networkadapters[0]).ipaddresses[0]
	I0108 22:33:45.786693    7060 main.go:141] libmachine: [stdout =====>] : 172.29.99.183
	
	I0108 22:33:45.786693    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:33:45.787378    7060 sshutil.go:53] new ssh client: &{IP:172.29.99.183 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\stopped-upgrade-266300\id_rsa Username:docker}
	I0108 22:33:45.889020    7060 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7754076s)
	I0108 22:33:45.889529    7060 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:33:45.908391    7060 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 22:33:45.928293    7060 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 22:33:45.951775    7060 provision.go:86] duration metric: configureAuth took 15.5541983s
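configureAuth above refreshes the Docker TLS material: the host-side ca/cert/key copies under .minikube are replaced, a server certificate is issued with the VM's IP and hostname in its SANs, and ca.pem, server.pem and server-key.pem are scp'd to /etc/docker so dockerd can run with --tlsverify. The SAN list can be checked against the log's san=[...] entry with a standard openssl query (an illustrative diagnostic, run on the guest):

	# Inspect the server certificate the provisioner copied to /etc/docker.
	sudo openssl x509 -in /etc/docker/server.pem -noout -text \
	    | grep -A1 'Subject Alternative Name'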
	I0108 22:33:45.951775    7060 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:33:45.952532    7060 config.go:182] Loaded profile config "stopped-upgrade-266300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0108 22:33:45.952532    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:33:48.129994    7060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:33:48.129994    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:33:48.130152    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-266300 ).networkadapters[0]).ipaddresses[0]
	I0108 22:33:50.615807    7060 main.go:141] libmachine: [stdout =====>] : 172.29.99.183
	
	I0108 22:33:50.615807    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:33:50.622406    7060 main.go:141] libmachine: Using SSH client type: native
	I0108 22:33:50.623164    7060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.99.183 22 <nil> <nil>}
	I0108 22:33:50.623164    7060 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 22:33:50.748031    7060 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0108 22:33:50.748031    7060 buildroot.go:70] root file system type: tmpfs
	I0108 22:33:50.748561    7060 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 22:33:50.748561    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:33:52.889892    7060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:33:52.890002    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:33:52.890103    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-266300 ).networkadapters[0]).ipaddresses[0]
	I0108 22:33:55.540349    7060 main.go:141] libmachine: [stdout =====>] : 172.29.99.183
	
	I0108 22:33:55.540506    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:33:55.546161    7060 main.go:141] libmachine: Using SSH client type: native
	I0108 22:33:55.546832    7060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.99.183 22 <nil> <nil>}
	I0108 22:33:55.546832    7060 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 22:33:55.698882    7060 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 22:33:55.698882    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:33:57.846069    7060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:33:57.846069    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:33:57.846177    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-266300 ).networkadapters[0]).ipaddresses[0]
	I0108 22:34:00.451396    7060 main.go:141] libmachine: [stdout =====>] : 172.29.99.183
	
	I0108 22:34:00.451760    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:34:00.457336    7060 main.go:141] libmachine: Using SSH client type: native
	I0108 22:34:00.458095    7060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.99.183 22 <nil> <nil>}
	I0108 22:34:00.458095    7060 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 22:34:01.579887    7060 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0108 22:34:01.579887    7060 machine.go:91] provisioned docker machine in 42.097809s
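The docker.service write-out uses an install-if-changed idiom: the rendered unit is written to docker.service.new, diffed against the installed unit, and only moved into place (followed by daemon-reload, enable and restart) when the two differ. On this minikube v1.6.0 guest no /lib/systemd/system/docker.service existed yet, so the diff fails, the new unit is installed, and the "Created symlink" line is systemd enabling it. Two illustrative follow-up commands for confirming what systemd ended up loading (not part of the provisioning script):

	# Show the loaded unit and its effective ExecStart.
	systemctl cat docker.service
	systemctl show docker.service --property=ExecStart --no-pager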
	I0108 22:34:01.580858    7060 start.go:300] post-start starting for "stopped-upgrade-266300" (driver="hyperv")
	I0108 22:34:01.580858    7060 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:34:01.594146    7060 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:34:01.594146    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:34:03.733740    7060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:34:03.733821    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:34:03.733821    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-266300 ).networkadapters[0]).ipaddresses[0]
	I0108 22:34:06.265299    7060 main.go:141] libmachine: [stdout =====>] : 172.29.99.183
	
	I0108 22:34:06.265299    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:34:06.266085    7060 sshutil.go:53] new ssh client: &{IP:172.29.99.183 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\stopped-upgrade-266300\id_rsa Username:docker}
	I0108 22:34:06.368579    7060 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7743229s)
	I0108 22:34:06.384945    7060 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:34:06.391588    7060 info.go:137] Remote host: Buildroot 2019.02.7
	I0108 22:34:06.391676    7060 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0108 22:34:06.392347    7060 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0108 22:34:06.393484    7060 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem -> 30082.pem in /etc/ssl/certs
	I0108 22:34:06.410820    7060 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:34:06.419684    7060 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\30082.pem --> /etc/ssl/certs/30082.pem (1708 bytes)
	I0108 22:34:06.437928    7060 start.go:303] post-start completed in 4.8570467s
	I0108 22:34:06.438035    7060 fix.go:56] fixHost completed within 1m45.6832053s
	I0108 22:34:06.438035    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:34:08.569322    7060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:34:08.569322    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:34:08.569437    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-266300 ).networkadapters[0]).ipaddresses[0]
	I0108 22:34:11.122919    7060 main.go:141] libmachine: [stdout =====>] : 172.29.99.183
	
	I0108 22:34:11.122919    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:34:11.128040    7060 main.go:141] libmachine: Using SSH client type: native
	I0108 22:34:11.128941    7060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.99.183 22 <nil> <nil>}
	I0108 22:34:11.128941    7060 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 22:34:11.254716    7060 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704753251.252484863
	
	I0108 22:34:11.254716    7060 fix.go:206] guest clock: 1704753251.252484863
	I0108 22:34:11.254716    7060 fix.go:219] Guest: 2024-01-08 22:34:11.252484863 +0000 UTC Remote: 2024-01-08 22:34:06.4380354 +0000 UTC m=+191.121521901 (delta=4.814449463s)
	I0108 22:34:11.254879    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:34:13.369818    7060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:34:13.369818    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:34:13.369923    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-266300 ).networkadapters[0]).ipaddresses[0]
	I0108 22:34:15.862533    7060 main.go:141] libmachine: [stdout =====>] : 172.29.99.183
	
	I0108 22:34:15.862533    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:34:15.868547    7060 main.go:141] libmachine: Using SSH client type: native
	I0108 22:34:15.869367    7060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4a6120] 0x4a8c60 <nil>  [] 0s} 172.29.99.183 22 <nil> <nil>}
	I0108 22:34:15.869367    7060 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1704753251
	I0108 22:34:15.997865    7060 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan  8 22:34:11 UTC 2024
	
	I0108 22:34:15.998398    7060 fix.go:226] clock set: Mon Jan  8 22:34:11 UTC 2024
	 (err=<nil>)
	I0108 22:34:15.998436    7060 start.go:83] releasing machines lock for "stopped-upgrade-266300", held for 1m55.2438527s
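The fix.go lines above are the guest clock sync: `date +%s.%N` reads the guest clock, the ~4.8 s delta against the host exceeds the tolerance, and the host's UNIX time is written back with `date -s @<epoch>`. The same comparison can be reproduced by hand (illustrative; the epoch below is the one taken from this log):

	# Read the guest clock, then reset it from a host-supplied epoch.
	date +%s.%N                  # guest clock as seconds.nanoseconds
	sudo date -s @1704753251     # value the provisioner pushed in this run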
	I0108 22:34:15.998734    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:34:18.328551    7060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:34:18.328551    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:34:18.328551    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-266300 ).networkadapters[0]).ipaddresses[0]
	I0108 22:34:21.171955    7060 main.go:141] libmachine: [stdout =====>] : 172.29.99.183
	
	I0108 22:34:21.172038    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:34:21.176361    7060 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:34:21.176435    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:34:21.197879    7060 ssh_runner.go:195] Run: cat /version.json
	I0108 22:34:21.198879    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-266300 ).state
	I0108 22:34:23.652207    7060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:34:23.652207    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:34:23.652287    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-266300 ).networkadapters[0]).ipaddresses[0]
	I0108 22:34:23.671150    7060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:34:23.671150    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:34:23.671150    7060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-266300 ).networkadapters[0]).ipaddresses[0]
	I0108 22:34:26.534333    7060 main.go:141] libmachine: [stdout =====>] : 172.29.99.183
	
	I0108 22:34:26.534333    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:34:26.534333    7060 sshutil.go:53] new ssh client: &{IP:172.29.99.183 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\stopped-upgrade-266300\id_rsa Username:docker}
	I0108 22:34:26.596465    7060 main.go:141] libmachine: [stdout =====>] : 172.29.99.183
	
	I0108 22:34:26.596657    7060 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:34:26.597454    7060 sshutil.go:53] new ssh client: &{IP:172.29.99.183 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\stopped-upgrade-266300\id_rsa Username:docker}
	I0108 22:34:26.626277    7060 ssh_runner.go:235] Completed: cat /version.json: (5.428276s)
	W0108 22:34:26.626389    7060 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0108 22:34:26.646490    7060 ssh_runner.go:195] Run: systemctl --version
	I0108 22:34:26.773597    7060 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.5971188s)
	I0108 22:34:26.792022    7060 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:34:26.802947    7060 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:34:26.817374    7060 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0108 22:34:26.842013    7060 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0108 22:34:26.849481    7060 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0108 22:34:26.849603    7060 start.go:475] detecting cgroup driver to use...
	I0108 22:34:26.849845    7060 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:34:26.881740    7060 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0108 22:34:26.905008    7060 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 22:34:26.912833    7060 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 22:34:26.927005    7060 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 22:34:26.957065    7060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 22:34:26.978098    7060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 22:34:27.006812    7060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 22:34:27.030025    7060 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:34:27.054635    7060 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 22:34:27.081399    7060 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:34:27.106558    7060 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:34:27.125792    7060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:34:27.238799    7060 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 22:34:27.260412    7060 start.go:475] detecting cgroup driver to use...
	I0108 22:34:27.278761    7060 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 22:34:27.304755    7060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:34:27.332569    7060 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:34:27.373329    7060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:34:27.404391    7060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 22:34:27.418698    7060 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:34:27.451960    7060 ssh_runner.go:195] Run: which cri-dockerd
	I0108 22:34:27.474680    7060 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 22:34:27.482751    7060 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0108 22:34:27.510137    7060 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 22:34:27.631674    7060 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 22:34:27.741829    7060 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0108 22:34:27.742105    7060 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
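docker.go:574 reports configuring Docker for the "cgroupfs" cgroup driver by writing a small /etc/docker/daemon.json (130 bytes, contents not echoed in the log). An illustrative stand-in for such a file, assuming it only pins the cgroup driver; the real payload may also carry storage or logging options:

	# Hypothetical equivalent of the daemon.json written above (not the literal file).
	cat <<'EOF' | sudo tee /etc/docker/daemon.json
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF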
	I0108 22:34:27.769181    7060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:34:27.932845    7060 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 22:34:29.017128    7060 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0837451s)
	I0108 22:34:29.033923    7060 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0108 22:34:29.059336    7060 out.go:177] 
	W0108 22:34:29.062817    7060 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Logs begin at Mon 2024-01-08 22:33:06 UTC, end at Mon 2024-01-08 22:34:29 UTC. --
	Jan 08 22:34:00 stopped-upgrade-266300 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.892574275Z" level=info msg="Starting up"
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.895362565Z" level=info msg="libcontainerd: started new containerd process" pid=2501
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.895590964Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.895682864Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.895765463Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.895869263Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.935948514Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.936354012Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.936833610Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.937100909Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.937128009Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.938773103Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.938867103Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.939367401Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.940316397Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.940599596Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.940673696Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.940750096Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.940764996Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.940772796Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.944280583Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.944425582Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.944461882Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.944474282Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.944484382Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.944495082Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.944505082Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.944567081Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.944589281Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.944600681Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.944741381Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.944884480Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.945385678Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.945473278Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.945506278Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.945517878Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.945527378Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.945536378Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.945544878Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.945554378Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.945563678Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.945572578Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.945581478Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.945626477Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.945639477Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.945701977Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.945723877Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.945827677Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.946013876Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.946027676Z" level=info msg="containerd successfully booted in 0.012877s"
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.955931939Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.955961639Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.955980339Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.955990639Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.957483733Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.957529833Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.957551133Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.957570233Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 22:34:00 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:00.973186975Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jan 08 22:34:01 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:01.128461425Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Jan 08 22:34:01 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:01.128619925Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Jan 08 22:34:01 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:01.128634025Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Jan 08 22:34:01 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:01.128641025Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Jan 08 22:34:01 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:01.128648925Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Jan 08 22:34:01 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:01.128656125Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Jan 08 22:34:01 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:01.128944524Z" level=info msg="Loading containers: start."
	Jan 08 22:34:01 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:01.430008971Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 08 22:34:01 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:01.513402879Z" level=info msg="Loading containers: done."
	Jan 08 22:34:01 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:01.542568177Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Jan 08 22:34:01 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:01.547875259Z" level=info msg="Daemon has completed initialization"
	Jan 08 22:34:01 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:01.577191756Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 08 22:34:01 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:01.577231656Z" level=info msg="API listen on [::]:2376"
	Jan 08 22:34:01 stopped-upgrade-266300 systemd[1]: Started Docker Application Container Engine.
	Jan 08 22:34:27 stopped-upgrade-266300 systemd[1]: Stopping Docker Application Container Engine...
	Jan 08 22:34:27 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:27.945593210Z" level=info msg="Processing signal 'terminated'"
	Jan 08 22:34:27 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:27.946705310Z" level=info msg="Daemon shutdown complete"
	Jan 08 22:34:27 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:27.946748710Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Jan 08 22:34:27 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:27.946780110Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 08 22:34:27 stopped-upgrade-266300 dockerd[2494]: time="2024-01-08T22:34:27.946896210Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 08 22:34:28 stopped-upgrade-266300 systemd[1]: docker.service: Succeeded.
	Jan 08 22:34:28 stopped-upgrade-266300 systemd[1]: Stopped Docker Application Container Engine.
	Jan 08 22:34:28 stopped-upgrade-266300 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 22:34:29 stopped-upgrade-266300 dockerd[2936]: time="2024-01-08T22:34:29.009373510Z" level=info msg="Starting up"
	Jan 08 22:34:29 stopped-upgrade-266300 dockerd[2936]: time="2024-01-08T22:34:29.012778510Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 22:34:29 stopped-upgrade-266300 dockerd[2936]: time="2024-01-08T22:34:29.012890110Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 22:34:29 stopped-upgrade-266300 dockerd[2936]: time="2024-01-08T22:34:29.012983510Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 08 22:34:29 stopped-upgrade-266300 dockerd[2936]: time="2024-01-08T22:34:29.013015310Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 22:34:29 stopped-upgrade-266300 dockerd[2936]: time="2024-01-08T22:34:29.013407110Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Jan 08 22:34:29 stopped-upgrade-266300 dockerd[2936]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused": unavailable
	Jan 08 22:34:29 stopped-upgrade-266300 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 08 22:34:29 stopped-upgrade-266300 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 08 22:34:29 stopped-upgrade-266300 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0108 22:34:29.063244    7060 out.go:239] * 
	W0108 22:34:29.064289    7060 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 22:34:29.065814    7060 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-windows-amd64.exe start -p stopped-upgrade-266300 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (505.39s)
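Note on the failure above: after minikube wrote /etc/docker/daemon.json and ran `sudo systemctl restart docker`, the restarted dockerd (pid 2936) tried to dial /run/containerd/containerd.sock and got "connection refused", so docker.service exited with status 1 and minikube aborted with RUNTIME_ENABLE. The previous dockerd (pid 2494) had been talking to its own child containerd on /var/run/docker/containerd/containerd.sock, which no longer exists after the restart. Below is a minimal diagnostic sketch in Go, not part of the minikube test suite, assuming one wants to confirm that the socket the new daemon will dial is accepting connections before issuing the restart; the socket path is taken from the log above and the timeout is an arbitrary illustration.

// checksock.go — a hypothetical diagnostic helper, not from the minikube repo.
// It only probes the containerd socket that dockerd[2936] failed to reach.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const socket = "/run/containerd/containerd.sock" // socket dockerd[2936] could not dial
	conn, err := net.DialTimeout("unix", socket, 2*time.Second)
	if err != nil {
		// Mirrors the "connection refused" that made docker.service fail above.
		fmt.Fprintf(os.Stderr, "containerd not reachable at %s: %v\n", socket, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("containerd socket is accepting connections")
}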

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (10800.658s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-768400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.29.0-rc.2
panic: test timed out after 3h0m0s
running tests:
	TestNetworkPlugins (1h3m41s)
	TestNetworkPlugins/group (1h3m41s)
	TestStartStop (51m53s)
	TestStartStop/group (51m53s)
	TestStartStop/group/default-k8s-diff-port (33s)
	TestStartStop/group/default-k8s-diff-port/serial (33s)
	TestStartStop/group/default-k8s-diff-port/serial/FirstStart (33s)
	TestStartStop/group/embed-certs (3m54s)
	TestStartStop/group/embed-certs/serial (3m54s)
	TestStartStop/group/embed-certs/serial/FirstStart (3m54s)
	TestStartStop/group/no-preload (6m50s)
	TestStartStop/group/no-preload/serial (6m50s)
	TestStartStop/group/no-preload/serial/SecondStart (18s)
	TestStartStop/group/old-k8s-version (9m6s)
	TestStartStop/group/old-k8s-version/serial (9m6s)
	TestStartStop/group/old-k8s-version/serial/SecondStart (2m53s)

                                                
                                                
goroutine 3275 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2259 +0x3b9
created by time.goFunc
	/usr/local/go/src/time/sleep.go:176 +0x2d
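The goroutine above is the testing package's global alarm: testing.(*M).startAlarm fires when the binary-wide -timeout (3h0m0s here) elapses, which is why TestStartStop/group/no-preload/serial/SecondStart is reported at 10800s and the whole test binary panics with this goroutine dump instead of finishing. A minimal sketch follows, assuming a long-running subtest wants to bail out gracefully before that alarm fires; the test name and the 15-minute margin are illustrative and not from the minikube suite.

package integration

import (
	"testing"
	"time"
)

// TestLongSecondStart is a hypothetical placeholder, not a real minikube test.
func TestLongSecondStart(t *testing.T) {
	// t.Deadline reports the deadline derived from `go test -timeout=...`,
	// the same deadline that testing.(*M).startAlarm enforces above.
	if deadline, ok := t.Deadline(); ok {
		remaining := time.Until(deadline)
		if remaining < 15*time.Minute { // margin chosen for illustration only
			t.Skipf("only %s left before the -timeout alarm fires; skipping", remaining.Round(time.Second))
		}
	}
	// ... the long-running start/stop sequence would run here ...
}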

                                                
                                                
goroutine 1 [chan receive, 36 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc000482340, 0xc000c0fb80)
	/usr/local/go/src/testing/testing.go:1601 +0x138
testing.runTests(0xc00044bf40?, {0x51a9d80, 0x2a, 0x2a}, {0xc000c0fbe8?, 0xfbbfe5?, 0x51cba20?})
	/usr/local/go/src/testing/testing.go:2052 +0x445
testing.(*M).Run(0xc00044bf40)
	/usr/local/go/src/testing/testing.go:1925 +0x636
k8s.io/minikube/test/integration.TestMain(0xc00009bef0?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x88
main.main()
	_testmain.go:131 +0x1c6

                                                
                                                
goroutine 11 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000154300)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 2053 [chan receive, 5 minutes]:
testing.(*T).Run(0xc0023904e0, {0x3002031?, 0x0?}, 0xc000c21200)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0023904e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xaf8
testing.tRunner(0xc0023904e0, 0xc0021263c0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1983
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 38 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.110.1/klog.go:1157 +0x111
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 37
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.110.1/klog.go:1153 +0x171

                                                
                                                
goroutine 3033 [chan receive, 9 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00225f6c0, 0xc000106240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3028
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

                                                
                                                
goroutine 902 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 901
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 2613 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc00225f190, 0x13)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3ea7c10?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00295d5c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00225f1c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0026b5f88?, {0x3eabfc0, 0xc0028863c0}, 0x1, 0xc000106240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0xf4821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc0026b5fd0?, 0x108dfc7?, 0xc000069080?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2589
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 1814 [chan receive, 65 minutes]:
testing.(*T).Run(0xc0021feea0, {0x3000b57?, 0xf7806d?}, 0xc002210018)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0021feea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0021feea0, 0x3a59698)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1404 [chan receive, 136 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000579d80, 0xc000106240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1402
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

                                                
                                                
goroutine 3200 [syscall, locked to thread]:
syscall.SyscallN(0x51fa900?, {0xc0021d3c28?, 0x0?, 0x4068760?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0021d3c80?, 0xf1e656?, 0x5226c40?, 0xc0021d3ce8?, 0xf113bd?, 0x1bd1e540108?, 0xc00070504d?, 0xc0021d3ce0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc00264a20d?, 0x5f3, 0x800?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc0004f4280?, {0xc00264a20d?, 0x0?, 0xc00264a000?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc0004f4280, {0xc00264a20d, 0x5f3, 0x5f3})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000d04e60, {0xc00264a20d?, 0xc0021d3e68?, 0xc0021d3e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0008e4330, {0x3eaac60, 0xc000d04e60})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3eaace0, 0xc0008e4330}, {0x3eaac60, 0xc000d04e60}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc0026aa7e0?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3199
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

                                                
                                                
goroutine 135 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009225c0, 0xc000106240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 147
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

                                                
                                                
goroutine 3141 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc002127510, 0x0)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3ea7c10?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000d4a6c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002127540)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c03f88?, {0x3eabfc0, 0xc000024570}, 0x1, 0xc000106240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0xf4821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc000c03fd0?, 0x108dfc7?, 0xc0029b03c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3155
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 134 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0007c2ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 147
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 3149 [chan receive, 5 minutes]:
testing.(*T).Run(0xc0025ef860, {0x300b49d?, 0xc002319e00?}, 0xc000c21280)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0025ef860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x25e
testing.tRunner(0xc0025ef860, 0xc000c21200)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2053
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1983 [chan receive, 52 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc00283f6c0, 0x3a598b8)
	/usr/local/go/src/testing/testing.go:1601 +0x138
created by testing.(*T).Run in goroutine 1874
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 3100 [chan receive]:
testing.(*T).Run(0xc000483a00, {0x300d64e?, 0xc000d63e00?}, 0xc000c20b80)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc000483a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x25e
testing.tRunner(0xc000483a00, 0xc000069400)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2051
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2589 [chan receive, 20 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00225f1c0, 0xc000106240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2545
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

                                                
                                                
goroutine 888 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00276f9e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 816
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 1984 [chan receive, 9 minutes]:
testing.(*T).Run(0xc00283f860, {0x3002031?, 0x0?}, 0xc000c20500)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00283f860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xaf8
testing.tRunner(0xc00283f860, 0xc002126180)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1983
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 157 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000922590, 0x3c)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3ea7c10?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0007c29c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009225c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xd?, {0x3eabfc0, 0xc0008e57a0}, 0x1, 0xc000106240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x1049180?, 0x3b9aca00, 0x0, 0x20?, 0xc00001bf80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x104a085?, 0xc000504820?, 0xc000d12200?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 135
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 158 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3ece760, 0xc000106240}, 0xc00000df50, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3ece760, 0xc000106240}, 0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3ece760?, 0xc000106240?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 135
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 159 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 158
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 3042 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2945
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 1985 [chan receive, 52 minutes]:
testing.(*testContext).waitParallel(0xc000675360)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc00283fa00)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc00283fa00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00283fa00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00283fa00, 0xc0021261c0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1983
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2356 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002d0b140)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2350
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 2225 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc000c17290, 0x16)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3ea7c10?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0007c33e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000c172c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0022d9f48?, {0x3eabfc0, 0xc002988b70}, 0x1, 0xc000106240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002b02780?, 0x3b9aca00, 0x0, 0xd0?, 0xf4821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x1486c40?, 0xc0001f3198?, 0xc0022d9fb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2250
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 1411 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1410
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 686 [IO wait, 163 minutes]:
internal/poll.runtime_pollWait(0x1bd63dee228, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0x0?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc0020d9418, 0xc0024d9bb8)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc0020d9400, 0x3f4, {0xc000c76000?, 0xc0005a0000?, 0x3a5a158?}, 0xc0024d9cc8?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc0020d9400, 0xc0024d9d90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc0020d9400)
	/usr/local/go/src/net/fd_windows.go:166 +0x54
net.(*TCPListener).accept(0xc0004d22e0)
	/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e
net.(*TCPListener).Accept(0xc0004d22e0)
	/usr/local/go/src/net/tcpsock.go:315 +0x30
net/http.(*Server).Serve(0xc0006501e0, {0x3ec21e0, 0xc0004d22e0})
	/usr/local/go/src/net/http/server.go:3056 +0x364
net/http.(*Server).ListenAndServe(0xc0006501e0)
	/usr/local/go/src/net/http/server.go:2985 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0007849c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 683
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2211 +0x13a

                                                
                                                
goroutine 3177 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc000871080, 0xc0029b1800)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3150
	/usr/local/go/src/os/exec/exec.go:743 +0xa34

                                                
                                                
goroutine 2168 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00276f0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2145
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 3189 [syscall, locked to thread]:
syscall.SyscallN(0x51f9700?, {0xc002377c28?, 0x3a37323a32322038?, 0x3f5ad18?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc002377c80?, 0xf1e656?, 0xc0024dd380?, 0xc002377ce8?, 0xf11265?, 0xf485dc?, 0xc0024dd380?, 0xc002377ce0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc00054b2a4?, 0x55c, 0xfb7fbf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc000658280?, {0xc00054b2a4?, 0x0?, 0xc00054b000?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc000658280, {0xc00054b2a4, 0x55c, 0x55c})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00072a150, {0xc00054b2a4?, 0xc002377e68?, 0xc002377e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0028861e0, {0x3eaac60, 0xc00072a150})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3eaace0, 0xc0028861e0}, {0x3eaac60, 0xc00072a150}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc000c0b490?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3188
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

                                                
                                                
goroutine 1874 [chan receive, 52 minutes]:
testing.(*T).Run(0xc0021ffa00, {0x3000b57?, 0x1ae2514d25bc?}, 0x3a598b8)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestStartStop(0xc0021ff860?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0021ffa00, 0x3a596e0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2258 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3ece760, 0xc000106240}, 0xc0021d1f50, 0xc0021795b8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3ece760, 0xc000106240}, 0x1?, 0x1?, 0xc0021d1fb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3ece760?, 0xc000106240?}, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0021d1fd0?, 0x108dfc7?, 0xc0020d64b0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2250
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3143 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3142
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 901 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3ece760, 0xc000106240}, 0xc00284df50, 0xc000987fd8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3ece760, 0xc000106240}, 0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3ece760?, 0xc000106240?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x108df65?, 0xc0008702c0?, 0xc0008e2f00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 889
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 1862 [chan receive]:
testing.(*testContext).waitParallel(0xc000675360)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1571 +0x53c
testing.tRunner(0xc000705860, 0xc002210018)
	/usr/local/go/src/testing/testing.go:1601 +0x138
created by testing.(*T).Run in goroutine 1814
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 3271 [syscall, locked to thread]:
syscall.SyscallN(0x7ffbad7a4de0?, {0xc00214fa80?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0xa?, 0xc00214fb98?, 0xc00214fa88?, 0xc00214fbb8?, 0x100c00214fb80?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0xc000d05368?, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1145 +0x5d
os.(*Process).wait(0xc000cf02d0)
	/usr/local/go/src/os/exec_windows.go:18 +0x55
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc002a92000)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc0024c9ba0?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc0024c9ba0, 0xc002a92000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.validateSecondStart({0x3ece5a0, 0xc0004407e0}, 0xc0024c9ba0, {0xc0023b8030, 0x11}, {0x659c80dc?, 0xc022ea33b0?}, {0x1e57d5274cc8?, 0xc002137f60?}, {0xc00073de00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0024ca900?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0024c9ba0, 0xc000c20b80)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 3100
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1237 [chan send, 146 minutes]:
os/exec.(*Cmd).watchCtx(0xc00237a2c0, 0xc000055560)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 657
	/usr/local/go/src/os/exec/exec.go:743 +0xa34

                                                
                                                
goroutine 3142 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3ece760, 0xc000106240}, 0xc002317f50, 0xc002178d18?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3ece760, 0xc000106240}, 0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3ece760?, 0xc000106240?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x108df65?, 0xc0025ce580?, 0xc00286f200?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3155
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2875 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0021799e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2882
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 1410 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3ece760, 0xc000106240}, 0xc00228bf50, 0xc000c78418?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3ece760, 0xc000106240}, 0x1?, 0x1?, 0xc00228bfb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3ece760?, 0xc000106240?}, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00228bfd0?, 0x108dfc7?, 0xc0029b0600?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1404
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 889 [chan receive, 152 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000c17900, 0xc000106240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 816
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

                                                
                                                
goroutine 900 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000c178d0, 0x36)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3ea7c10?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00276f8c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000c17900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002289f88?, {0x3eabfc0, 0xc0022a5650}, 0x1, 0xc000106240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0xf4821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc002289fd0?, 0x108dfc7?, 0xc000c16f40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 889
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2973 [chan receive, 3 minutes]:
testing.(*T).Run(0xc0007856c0, {0x300d64e?, 0xc00284be00?}, 0xc00265c180)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0007856c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x25e
testing.tRunner(0xc0007856c0, 0xc000c20500)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1984
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1403 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000c789c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 1402
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 3154 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000d4a7e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3073
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 3266 [select]:
os/exec.(*Cmd).watchCtx(0xc00019de40, 0xc0008d06c0)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3199
	/usr/local/go/src/os/exec/exec.go:743 +0xa34

                                                
                                                
goroutine 979 [chan send, 152 minutes]:
os/exec.(*Cmd).watchCtx(0xc0024ab4a0, 0xc0024cbda0)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 978
	/usr/local/go/src/os/exec/exec.go:743 +0xa34

                                                
                                                
goroutine 2197 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc002746550, 0x17)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3ea7c10?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00276efc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002746580)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0026b1f88?, {0x3eabfc0, 0xc002216000}, 0x1, 0xc000106240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0xf4821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc0026b1fd0?, 0x108dfc7?, 0xc00286eba0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2169
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2886 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000922fd0, 0x10)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3ea7c10?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0021798c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000923000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000d63f48?, {0x3eabfc0, 0xc002259560}, 0x1, 0xc000106240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0023b7f20?, 0x3b9aca00, 0x0, 0xd0?, 0xf4821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x108df65?, 0xc00235e420?, 0xc0026aa0c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2876
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 1361 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc000579d50, 0x31)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3ea7c10?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000c788a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000579d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00261df90?, {0x3eabfc0, 0xc0029082a0}, 0x1, 0xc000106240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0029b04e0?, 0x3b9aca00, 0x0, 0xd0?, 0xf4821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x108df65?, 0xc000c2ec60?, 0xc0026aa1e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1404
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2051 [chan receive, 7 minutes]:
testing.(*T).Run(0xc00283fd40, {0x3002031?, 0x0?}, 0xc000069400)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00283fd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xaf8
testing.tRunner(0xc00283fd40, 0xc002126240)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1983
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 3188 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x7ffbad7a4de0?, {0xc002555a80?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0xd?, 0xc002555b98?, 0xc002555a88?, 0xc002555bb8?, 0x100c002555b80?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0xc00072a138?, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1145 +0x5d
os.(*Process).wait(0xc000955140)
	/usr/local/go/src/os/exec_windows.go:18 +0x55
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000d17ce0)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc000505520?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc000505520, 0xc000d17ce0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.validateSecondStart({0x3ece5a0, 0xc0006860e0}, 0xc000505520, {0xc00281e498, 0x16}, {0x659c8041?, 0xc009d8c9e0?}, {0x1e33a568de6c?, 0xc00001bf60?}, {0xc0008fe0c0, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc002179020?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc000505520, 0xc00265c180)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2973
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 3199 [syscall, locked to thread]:
syscall.SyscallN(0x7ffbad7a4de0?, {0xc002153ab0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0xa?, 0xc002137bc8?, 0xc002137ab8?, 0xc002137be8?, 0x100c002137bb0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0xc000d04e38?, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1145 +0x5d
os.(*Process).wait(0xc002ac4ba0)
	/usr/local/go/src/os/exec_windows.go:18 +0x55
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00019de40)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc0024c9520?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc0024c9520, 0xc00019de40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.validateFirstStart({0x3ece5a0?, 0xc000478000?}, 0xc0024c9520, {0xc0028840a0?, 0xf7806d?}, {0x659c80cd?, 0xc00fed04cc?}, {0x1e5444196124?, 0xc002137f60?}, {0xc0008e8000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:186 +0xc5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0024ca900?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0024c9520, 0xc000c20980)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 3198
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 3032 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00295d2c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3028
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 2198 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3ece760, 0xc000106240}, 0xc002379f50, 0x34313a6f672e6e69?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3ece760, 0xc000106240}, 0x33?, 0x3030393520202020?, 0x3a6f672e78696620?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3ece760?, 0xc000106240?}, 0x90a3e6c696e3c3d?, 0x3232203830313057?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x657078656e75205d?, 0x63616d2064657463?, 0x61747320656e6968?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2169
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2614 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3ece760, 0xc000106240}, 0xc000d79f50, 0xc002178dd8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3ece760, 0xc000106240}, 0x1?, 0x1?, 0xc000d79fb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3ece760?, 0xc000106240?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000d79fd0?, 0x108dfc7?, 0xc000054b40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2589
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2169 [chan receive, 32 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002746580, 0xc000106240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2145
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

                                                
                                                
goroutine 2876 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000923000, 0xc000106240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2882
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

                                                
                                                
goroutine 2250 [chan receive, 30 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000c172c0, 0xc000106240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2248
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

                                                
                                                
goroutine 2199 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2198
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 2755 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2754
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 2050 [chan receive]:
testing.(*T).Run(0xc00283fba0, {0x3002031?, 0x0?}, 0xc000c20900)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00283fba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xaf8
testing.tRunner(0xc00283fba0, 0xc002126200)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1983
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 3272 [syscall, locked to thread]:
syscall.SyscallN(0x1e?, {0xc002789c28?, 0xc002789c80?, 0xf11265?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc002646400?, 0xc0022aa590?, 0x0?, 0xc002789ce8?, 0xf11265?, 0xc000c20780?, 0x8?, 0x8?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc00002d5ea?, 0x216, 0xfb7fbf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc000658c80?, {0xc00002d5ea?, 0x0?, 0xc00002d400?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc000658c80, {0xc00002d5ea, 0x216, 0x216})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000d05388, {0xc00002d5ea?, 0xc0009876e0?, 0xc002789e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0008e4720, {0x3eaac60, 0xc000d05388})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3eaace0, 0xc0008e4720}, {0x3eaac60, 0xc000d05388}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc000054d20?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3271
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

                                                
                                                
goroutine 2259 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2258
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 3176 [runnable, locked to thread]:
syscall.SyscallN(0x1bd1e54c4b0?, {0xc0021e1c28?, 0x51e6b00?, 0x4068760?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0021e1c80?, 0xf1e656?, 0x5226c40?, 0xc0021e1ce8?, 0xf113bd?, 0x0?, 0x20000?, 0xc000000000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc0021af29a?, 0x6d66, 0xfb7fbf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc000468500?, {0xc0021af29a?, 0xd260?, 0xd260?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc000468500, {0xc0021af29a, 0x6d66, 0x6d66})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0007d8228, {0xc0021af29a?, 0xc0021e1e68?, 0xc0021e1e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002851020, {0x3eaac60, 0xc0007d8228})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3eaace0, 0xc002851020}, {0x3eaac60, 0xc0007d8228}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc00096e240?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3150
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

                                                
                                                
goroutine 3150 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x7ffbad7a4de0?, {0xc002551ab0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0xa?, 0xc002547bc8?, 0xc002547ab8?, 0xc002547be8?, 0x100c002547bb0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0xc0007d8200?, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1145 +0x5d
os.(*Process).wait(0xc000a5d4a0)
	/usr/local/go/src/os/exec_windows.go:18 +0x55
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000871080)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc0025efa00?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc0025efa00, 0xc000871080)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.validateFirstStart({0x3ece5a0?, 0xc000441730?}, 0xc0025efa00, {0xc0025d0510?, 0xf7806d?}, {0x659c8004?, 0xc01cd10a44?}, {0x1e25848371d4?, 0xc002547f60?}, {0xc0008e9100, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:186 +0xc5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc002b03980?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0025efa00, 0xc000c21280)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 3149
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2249 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0007c3560)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2248
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 3190 [syscall, locked to thread]:
syscall.SyscallN(0x51fd480?, {0xc0053f1c28?, 0xc0053f1c50?, 0x4068760?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0053f1c80?, 0xf1e656?, 0x5226c40?, 0xc0053f1ce8?, 0xf113bd?, 0x1bd1e540eb8?, 0xc000616387?, 0xc0053f1ce0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc00216fde9?, 0x217, 0xfb7fbf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc000658780?, {0xc00216fde9?, 0x0?, 0xc002168000?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc000658780, {0xc00216fde9, 0x217, 0x217})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00072a180, {0xc00216fde9?, 0xc0053f1e68?, 0xc0053f1e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002886210, {0x3eaac60, 0xc00072a180})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3eaace0, 0xc002886210}, {0x3eaac60, 0xc00072a180}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3188
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

                                                
                                                
goroutine 2736 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0023f0bc0, 0xc000106240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2734
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

                                                
                                                
goroutine 3191 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc000d17ce0, 0xc0026aa2a0)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3188
	/usr/local/go/src/os/exec/exec.go:743 +0xa34

                                                
                                                
goroutine 2588 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00295d6e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2545
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 2944 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc00225f690, 0x1)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3ea7c10?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00295d1a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00225f6c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002505f88?, {0x3eabfc0, 0xc0034fc3c0}, 0x1, 0xc000106240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0000555c0?, 0x3b9aca00, 0x0, 0xd0?, 0xf4821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x108df65?, 0xc0024aa420?, 0xc0029b1080?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3033
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3274 [select]:
os/exec.(*Cmd).watchCtx(0xc002a92000, 0xc0008d0de0)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3271
	/usr/local/go/src/os/exec/exec.go:743 +0xa34

                                                
                                                
goroutine 3273 [syscall, locked to thread]:
syscall.SyscallN(0x51fd480?, {0xc0053efc28?, 0x0?, 0x4068760?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0053efc80?, 0xf1e656?, 0x5226c40?, 0xc0053efce8?, 0xf113bd?, 0x1bd1e540eb8?, 0xc0025ee887?, 0xc0053efce0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc0022c6526?, 0x3ada, 0xfb7fbf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc000659180?, {0xc0022c6526?, 0x0?, 0xc0022c2000?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc000659180, {0xc0022c6526, 0x3ada, 0x3ada})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000d05458, {0xc0022c6526?, 0xc0053efe68?, 0xc0053efe68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0008e4750, {0x3eaac60, 0xc000d05458})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3eaace0, 0xc0008e4750}, {0x3eaac60, 0xc000d05458}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc0008d0c00?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3271
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

                                                
                                                
goroutine 2357 [chan receive, 28 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0023f0fc0, 0xc000106240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2350
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

                                                
                                                
goroutine 2386 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0023f0f90, 0x16)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3ea7c10?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002d0b020)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0023f0fc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3eabfc0, 0xc002954180}, 0x1, 0xc000106240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2357
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2387 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3ece760, 0xc000106240}, 0xc000d6bf50, 0xc002206700?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3ece760, 0xc000106240}, 0xe0?, 0xc000d6bfc8?, 0xfe633f?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3ece760?, 0xc000106240?}, 0x2b21e40?, 0xc0022948b0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000d6bfd0?, 0x148ec25?, 0xc0027472c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2357
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2388 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2387
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 2615 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2614
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 2754 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3ece760, 0xc000106240}, 0xc00213bf50, 0xc000d4abf8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3ece760, 0xc000106240}, 0x1?, 0x1?, 0xc00213bfb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3ece760?, 0xc000106240?}, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00213bfd0?, 0x108dfc7?, 0xc000bca240?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2736
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2577 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0023f0b90, 0x11)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3ea7c10?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002d0afc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0023f0bc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00269bf90?, {0x3eabfc0, 0xc0028b2000}, 0x1, 0xc000106240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0xf4821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc00269bfd0?, 0x108dfc7?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2736
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2735 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002d0b260)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2734
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 3175 [syscall, locked to thread]:
syscall.SyscallN(0xc002851020?, {0xc002547c28?, 0xc002547ba0?, 0xc002547c40?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc002547ce0?, 0xa?, 0xa?, 0xc002547ce8?, 0xf11265?, 0xf19308?, 0x3fda766?, 0x99c2adcabad?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc00264ab69?, 0x497, 0xfb7fbf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc000468000?, {0xc00264ab69?, 0xf11265?, 0xc00264a800?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc000468000, {0xc00264ab69, 0x497, 0x497})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0007d8208, {0xc00264ab69?, 0xc0001181b9?, 0xc002547e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002850ff0, {0x3eaac60, 0xc0007d8208})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3eaace0, 0xc002850ff0}, {0x3eaac60, 0xc0007d8208}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc000c21280?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3150
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

                                                
                                                
goroutine 2945 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3ece760, 0xc000106240}, 0xc00238bf50, 0xc0004ca160?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3ece760, 0xc000106240}, 0x0?, 0x0?, 0xc00238bfb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3ece760?, 0xc000106240?}, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00238bfd0?, 0x148c985?, 0xc0001f3500?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3033
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3198 [chan receive]:
testing.(*T).Run(0xc0024c9380, {0x300b49d?, 0xc002829e00?}, 0xc000c20980)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0024c9380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x25e
testing.tRunner(0xc0024c9380, 0xc000c20900)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2050
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2888 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2887
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 3201 [syscall, locked to thread]:
syscall.SyscallN(0x51fbc80?, {0xc002301c28?, 0x51e6b00?, 0x4068760?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc002301c80?, 0xf1e656?, 0x5226c40?, 0xc002301ce8?, 0xf113bd?, 0x1bd1e540a28?, 0x67?, 0xc000000000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc000c7bcbb?, 0x345, 0xfb7fbf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc0004f4780?, {0xc000c7bcbb?, 0x0?, 0xc000c7a000?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc0004f4780, {0xc000c7bcbb, 0x345, 0x345})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000d04f58, {0xc000c7bcbb?, 0xc002301e68?, 0xc002301e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0008e4390, {0x3eaac60, 0xc000d04f58})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3eaace0, 0xc0008e4390}, {0x3eaac60, 0xc000d04f58}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc0029b1020?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3199
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

                                                
                                                
goroutine 2887 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3ece760, 0xc000106240}, 0xc002917f50, 0xc0006193d8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3ece760, 0xc000106240}, 0x1?, 0x1?, 0xc002917fb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3ece760?, 0xc000106240?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002917fd0?, 0x108dfc7?, 0xc0008e2de0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2876
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3155 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002127540, 0xc000106240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3073
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

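Note on the dump above: goroutines 2887/2888 are parked inside k8s.io/apimachinery's PollImmediateUntilWithContext, which client-go's certificate rotation uses to wait on a condition. For readers unfamiliar with that wait helper, the following is a minimal, self-contained sketch of the same polling loop — not minikube or client-go code, just an illustration of the pattern whose frames show up as "select, 2 minutes" above.

// Hedged sketch: the polling pattern goroutines 2887/2888 are blocked in.
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	start := time.Now()
	// Poll every 500ms until the condition returns true, the condition
	// errors, or ctx is cancelled; a goroutine blocked here appears in a
	// stack dump much like the wait.poll frames above.
	err := wait.PollImmediateUntilWithContext(ctx, 500*time.Millisecond,
		func(ctx context.Context) (bool, error) {
			return time.Since(start) > 2*time.Second, nil
		})
	fmt.Println("poll finished:", err)
}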
                                                
                                    

Test pass (162/208)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 20.59
4 TestDownloadOnly/v1.16.0/preload-exists 0.07
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.28
10 TestDownloadOnly/v1.28.4/json-events 12.32
11 TestDownloadOnly/v1.28.4/preload-exists 0
14 TestDownloadOnly/v1.28.4/kubectl 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.27
17 TestDownloadOnly/v1.29.0-rc.2/json-events 12.93
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.25
23 TestDownloadOnly/DeleteAll 1.28
24 TestDownloadOnly/DeleteAlwaysSucceeds 1.22
26 TestBinaryMirror 7.12
27 TestOffline 253.09
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.29
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.3
32 TestAddons/Setup 376.12
35 TestAddons/parallel/Ingress 64.67
36 TestAddons/parallel/InspektorGadget 25.75
37 TestAddons/parallel/MetricsServer 21.7
38 TestAddons/parallel/HelmTiller 27.08
40 TestAddons/parallel/CSI 95.05
41 TestAddons/parallel/Headlamp 32.27
42 TestAddons/parallel/CloudSpanner 20.8
43 TestAddons/parallel/LocalPath 85.16
44 TestAddons/parallel/NvidiaDevicePlugin 20.24
45 TestAddons/parallel/Yakd 5.03
48 TestAddons/serial/GCPAuth/Namespaces 0.36
49 TestAddons/StoppedEnableDisable 46.19
50 TestCertOptions 490.15
52 TestDockerFlags 508.75
53 TestForceSystemdFlag 402.74
54 TestForceSystemdEnv 502.17
61 TestErrorSpam/start 17.34
62 TestErrorSpam/status 36.73
63 TestErrorSpam/pause 22.6
64 TestErrorSpam/unpause 22.9
65 TestErrorSpam/stop 51.25
68 TestFunctional/serial/CopySyncFile 0.03
69 TestFunctional/serial/StartWithProxy 202.02
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 110.31
72 TestFunctional/serial/KubeContext 0.14
73 TestFunctional/serial/KubectlGetPods 0.23
76 TestFunctional/serial/CacheCmd/cache/add_remote 27.3
77 TestFunctional/serial/CacheCmd/cache/add_local 10.16
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.28
79 TestFunctional/serial/CacheCmd/cache/list 0.27
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.34
81 TestFunctional/serial/CacheCmd/cache/cache_reload 36.62
82 TestFunctional/serial/CacheCmd/cache/delete 0.55
83 TestFunctional/serial/MinikubeKubectlCmd 0.52
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.83
85 TestFunctional/serial/ExtraConfig 121.12
86 TestFunctional/serial/ComponentHealth 0.2
87 TestFunctional/serial/LogsCmd 8.46
88 TestFunctional/serial/LogsFileCmd 10.55
89 TestFunctional/serial/InvalidService 20.93
95 TestFunctional/parallel/StatusCmd 45.23
99 TestFunctional/parallel/ServiceCmdConnect 30.05
100 TestFunctional/parallel/AddonsCmd 0.9
101 TestFunctional/parallel/PersistentVolumeClaim 40.5
103 TestFunctional/parallel/SSHCmd 23.76
104 TestFunctional/parallel/CpCmd 63.07
105 TestFunctional/parallel/MySQL 61.96
106 TestFunctional/parallel/FileSync 11.21
107 TestFunctional/parallel/CertSync 67.72
111 TestFunctional/parallel/NodeLabels 0.21
113 TestFunctional/parallel/NonActiveRuntimeDisabled 12.4
115 TestFunctional/parallel/License 3.54
116 TestFunctional/parallel/Version/short 0.33
117 TestFunctional/parallel/Version/components 8.87
118 TestFunctional/parallel/ImageCommands/ImageListShort 7.68
119 TestFunctional/parallel/ImageCommands/ImageListTable 7.87
120 TestFunctional/parallel/ImageCommands/ImageListJson 7.71
121 TestFunctional/parallel/ImageCommands/ImageListYaml 8.04
122 TestFunctional/parallel/ImageCommands/ImageBuild 28.11
123 TestFunctional/parallel/ImageCommands/Setup 4.49
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 23.35
125 TestFunctional/parallel/ServiceCmd/DeployApp 18.53
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 21.62
127 TestFunctional/parallel/ServiceCmd/List 14.43
128 TestFunctional/parallel/ServiceCmd/JSONOutput 14.96
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 29.71
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 9.7
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.28
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 9.85
138 TestFunctional/parallel/ImageCommands/ImageRemove 16.25
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 18.83
147 TestFunctional/parallel/ProfileCmd/profile_not_create 9.84
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 11.1
149 TestFunctional/parallel/ProfileCmd/profile_list 9.51
150 TestFunctional/parallel/ProfileCmd/profile_json_output 9.4
151 TestFunctional/parallel/DockerEnv/powershell 46.92
152 TestFunctional/parallel/UpdateContextCmd/no_changes 2.62
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.58
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.66
155 TestFunctional/delete_addon-resizer_images 0.48
156 TestFunctional/delete_my-image_image 0.22
157 TestFunctional/delete_minikube_cached_images 0.2
161 TestImageBuild/serial/Setup 187.71
162 TestImageBuild/serial/NormalBuild 8.99
163 TestImageBuild/serial/BuildWithBuildArg 8.69
164 TestImageBuild/serial/BuildWithDockerIgnore 7.57
165 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.47
168 TestIngressAddonLegacy/StartLegacyK8sCluster 239.68
170 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 39.18
171 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 14.46
172 TestIngressAddonLegacy/serial/ValidateIngressAddons 80.13
175 TestJSONOutput/start/Command 231.25
176 TestJSONOutput/start/Audit 0
178 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Command 7.94
182 TestJSONOutput/pause/Audit 0
184 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Command 7.69
188 TestJSONOutput/unpause/Audit 0
190 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/stop/Command 33.24
194 TestJSONOutput/stop/Audit 0
196 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
198 TestErrorJSONOutput 1.51
203 TestMainNoArgs 0.26
207 TestMountStart/serial/StartWithMountFirst 147.18
208 TestMountStart/serial/VerifyMountFirst 9.53
209 TestMountStart/serial/StartWithMountSecond 146.38
210 TestMountStart/serial/VerifyMountSecond 9.46
211 TestMountStart/serial/DeleteFirst 25.7
212 TestMountStart/serial/VerifyMountPostDelete 9.38
213 TestMountStart/serial/Stop 21.77
214 TestMountStart/serial/RestartStopped 110.06
215 TestMountStart/serial/VerifyMountPostStop 9.65
218 TestMultiNode/serial/FreshStart2Nodes 409.75
219 TestMultiNode/serial/DeployApp2Nodes 9.33
221 TestMultiNode/serial/AddNode 218.33
222 TestMultiNode/serial/MultiNodeLabels 0.2
223 TestMultiNode/serial/ProfileList 7.52
224 TestMultiNode/serial/CopyFile 358.35
225 TestMultiNode/serial/StopNode 65.7
226 TestMultiNode/serial/StartAfterStop 164.68
231 TestPreload 491.21
232 TestScheduledStopWindows 321.93
239 TestKubernetesUpgrade 922.24
242 TestNoKubernetes/serial/StartNoK8sWithVersion 0.33
263 TestPause/serial/Start 520.78
264 TestStoppedBinaryUpgrade/Setup 0.65
266 TestPause/serial/SecondStartNoReconfiguration 251.62
267 TestPause/serial/Pause 8.01
268 TestPause/serial/VerifyStatus 12.34
269 TestPause/serial/Unpause 7.98
270 TestPause/serial/PauseAgain 8.07
271 TestPause/serial/DeletePaused 50.24
272 TestPause/serial/VerifyDeletedResources 24.36
274 TestStoppedBinaryUpgrade/MinikubeLogs 10.39
TestDownloadOnly/v1.16.0/json-events (20.59s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-145800 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-145800 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv: (20.5902777s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (20.59s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.07s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-145800
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-145800: exit status 85 (283.7899ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-145800 | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:10 UTC |          |
	|         | -p download-only-145800        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:10:38
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:10:38.373498    3324 out.go:296] Setting OutFile to fd 576 ...
	I0108 20:10:38.374482    3324 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:10:38.374482    3324 out.go:309] Setting ErrFile to fd 580...
	I0108 20:10:38.374482    3324 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 20:10:38.389311    3324 root.go:314] Error reading config file at C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0108 20:10:38.401330    3324 out.go:303] Setting JSON to true
	I0108 20:10:38.404276    3324 start.go:128] hostinfo: {"hostname":"minikube7","uptime":22580,"bootTime":1704722057,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0108 20:10:38.404363    3324 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 20:10:38.406192    3324 out.go:97] [download-only-145800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I0108 20:10:38.407297    3324 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 20:10:38.406439    3324 notify.go:220] Checking for updates...
	W0108 20:10:38.406538    3324 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0108 20:10:38.407619    3324 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0108 20:10:38.408379    3324 out.go:169] MINIKUBE_LOCATION=17907
	I0108 20:10:38.409058    3324 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0108 20:10:38.409780    3324 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 20:10:38.411371    3324 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:10:43.979814    3324 out.go:97] Using the hyperv driver based on user configuration
	I0108 20:10:43.979814    3324 start.go:298] selected driver: hyperv
	I0108 20:10:43.979814    3324 start.go:902] validating driver "hyperv" against <nil>
	I0108 20:10:43.980333    3324 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 20:10:44.033866    3324 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0108 20:10:44.034800    3324 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 20:10:44.035272    3324 cni.go:84] Creating CNI manager for ""
	I0108 20:10:44.035272    3324 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0108 20:10:44.035272    3324 start_flags.go:323] config:
	{Name:download-only-145800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-145800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:10:44.035954    3324 iso.go:125] acquiring lock: {Name:mk1869c5b5033c1301af33a7fa364ec43dd63efe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:10:44.037230    3324 out.go:97] Downloading VM boot image ...
	I0108 20:10:44.037467    3324 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.32.1-1702708929-17806-amd64.iso
	I0108 20:10:48.844418    3324 out.go:97] Starting control plane node download-only-145800 in cluster download-only-145800
	I0108 20:10:48.844418    3324 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 20:10:48.884498    3324 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0108 20:10:48.884498    3324 cache.go:56] Caching tarball of preloaded images
	I0108 20:10:48.884498    3324 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 20:10:48.885905    3324 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0108 20:10:48.885905    3324 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0108 20:10:48.958444    3324 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0108 20:10:52.909714    3324 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0108 20:10:52.911630    3324 preload.go:256] verifying checksum of C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0108 20:10:53.874389    3324 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0108 20:10:53.874818    3324 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\download-only-145800\config.json ...
	I0108 20:10:53.875440    3324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\download-only-145800\config.json: {Name:mke8174748d3840fe0fdc5b718991472fd7737c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:53.876997    3324 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 20:10:53.878982    3324 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/windows/amd64/kubectl.exe.sha1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\windows\amd64\v1.16.0/kubectl.exe
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-145800"

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 20:10:58.967469   11312 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.28s)
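Note: the subtest passes even though "minikube logs" exits with status 85, because a download-only profile has no cluster to read logs from and the test accepts that specific exit code. A hedged sketch of asserting on a specific exit status with os/exec follows; it is written in the spirit of aaa_download_only_test.go but is not the actual test code, and the binary path and expected code are taken from the log above purely for illustration.

// Hedged sketch: accept a specific non-zero exit code from a CLI invocation.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-windows-amd64.exe", "logs", "-p", "download-only-145800")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("unexpected success")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 85:
		// 85 is the status the report above shows for "logs" against a
		// download-only profile; the test records it and continues.
		fmt.Printf("expected exit status 85, output:\n%s", out)
	default:
		fmt.Println("unexpected error:", err)
	}
}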

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (12.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-145800 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-145800 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv: (12.319s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (12.32s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-145800
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-145800: exit status 85 (266.7802ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-145800 | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:10 UTC |          |
	|         | -p download-only-145800        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	| start   | -o=json --download-only        | download-only-145800 | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:10 UTC |          |
	|         | -p download-only-145800        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:10:59
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:10:59.314261   14216 out.go:296] Setting OutFile to fd 684 ...
	I0108 20:10:59.315135   14216 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:10:59.315135   14216 out.go:309] Setting ErrFile to fd 688...
	I0108 20:10:59.315135   14216 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 20:10:59.328755   14216 root.go:314] Error reading config file at C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0108 20:10:59.336719   14216 out.go:303] Setting JSON to true
	I0108 20:10:59.338674   14216 start.go:128] hostinfo: {"hostname":"minikube7","uptime":22601,"bootTime":1704722057,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0108 20:10:59.338674   14216 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 20:10:59.340572   14216 out.go:97] [download-only-145800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I0108 20:10:59.341129   14216 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 20:10:59.341818   14216 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0108 20:10:59.340869   14216 notify.go:220] Checking for updates...
	I0108 20:10:59.343062   14216 out.go:169] MINIKUBE_LOCATION=17907
	I0108 20:10:59.343243   14216 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0108 20:10:59.344406   14216 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 20:10:59.345063   14216 config.go:182] Loaded profile config "download-only-145800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0108 20:10:59.345904   14216 start.go:810] api.Load failed for download-only-145800: filestore "download-only-145800": Docker machine "download-only-145800" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:10:59.346196   14216 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 20:10:59.346382   14216 start.go:810] api.Load failed for download-only-145800: filestore "download-only-145800": Docker machine "download-only-145800" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:11:04.667002   14216 out.go:97] Using the hyperv driver based on existing profile
	I0108 20:11:04.667133   14216 start.go:298] selected driver: hyperv
	I0108 20:11:04.667133   14216 start.go:902] validating driver "hyperv" against &{Name:download-only-145800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-145800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:11:04.719153   14216 cni.go:84] Creating CNI manager for ""
	I0108 20:11:04.719218   14216 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 20:11:04.719218   14216 start_flags.go:323] config:
	{Name:download-only-145800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-145800 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:11:04.719568   14216 iso.go:125] acquiring lock: {Name:mk1869c5b5033c1301af33a7fa364ec43dd63efe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:11:04.720976   14216 out.go:97] Starting control plane node download-only-145800 in cluster download-only-145800
	I0108 20:11:04.721074   14216 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 20:11:04.762136   14216 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0108 20:11:04.762136   14216 cache.go:56] Caching tarball of preloaded images
	I0108 20:11:04.762247   14216 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 20:11:04.763271   14216 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0108 20:11:04.763364   14216 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0108 20:11:04.829741   14216 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0108 20:11:08.602206   14216 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0108 20:11:08.603324   14216 preload.go:256] verifying checksum of C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-145800"

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 20:11:11.573222   13632 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.27s)
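Note: the preload.go lines above ("getting checksum", "saving checksum", "verifying checksum") describe hashing the downloaded tarball against the md5 digest advertised in the ?checksum=md5:... query string. A hedged sketch of that verification step follows; the file path is a placeholder and the expected digest is simply copied from the v1.28.4 download URL in the log.

// Hedged sketch: verify a downloaded preload tarball against its md5 digest.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	const expected = "7ebdea7754e21f51b865dbfc36b53b7d" // from the v1.28.4 download URL above

	f, err := os.Open("preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	got := hex.EncodeToString(h.Sum(nil))

	if got != expected {
		log.Fatalf("checksum mismatch: got %s, want %s", got, expected)
	}
	fmt.Println("preload tarball checksum verified")
}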

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (12.93s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-145800 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-145800 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv: (12.9322956s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (12.93s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-145800
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-145800: exit status 85 (247.1962ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-145800 | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:10 UTC |          |
	|         | -p download-only-145800           |                      |                   |         |                     |          |
	|         | --force --alsologtostderr         |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |                   |         |                     |          |
	|         | --container-runtime=docker        |                      |                   |         |                     |          |
	|         | --driver=hyperv                   |                      |                   |         |                     |          |
	| start   | -o=json --download-only           | download-only-145800 | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:10 UTC |          |
	|         | -p download-only-145800           |                      |                   |         |                     |          |
	|         | --force --alsologtostderr         |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |                   |         |                     |          |
	|         | --container-runtime=docker        |                      |                   |         |                     |          |
	|         | --driver=hyperv                   |                      |                   |         |                     |          |
	| start   | -o=json --download-only           | download-only-145800 | minikube7\jenkins | v1.32.0 | 08 Jan 24 20:11 UTC |          |
	|         | -p download-only-145800           |                      |                   |         |                     |          |
	|         | --force --alsologtostderr         |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |                   |         |                     |          |
	|         | --container-runtime=docker        |                      |                   |         |                     |          |
	|         | --driver=hyperv                   |                      |                   |         |                     |          |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:11:11
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:11:11.913642    1120 out.go:296] Setting OutFile to fd 688 ...
	I0108 20:11:11.914656    1120 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:11:11.914656    1120 out.go:309] Setting ErrFile to fd 692...
	I0108 20:11:11.914656    1120 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 20:11:11.928978    1120 root.go:314] Error reading config file at C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0108 20:11:11.937749    1120 out.go:303] Setting JSON to true
	I0108 20:11:11.941753    1120 start.go:128] hostinfo: {"hostname":"minikube7","uptime":22613,"bootTime":1704722057,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0108 20:11:11.941753    1120 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 20:11:11.943297    1120 out.go:97] [download-only-145800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I0108 20:11:11.944016    1120 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 20:11:11.943297    1120 notify.go:220] Checking for updates...
	I0108 20:11:11.945324    1120 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0108 20:11:11.945993    1120 out.go:169] MINIKUBE_LOCATION=17907
	I0108 20:11:11.946843    1120 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0108 20:11:11.949044    1120 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 20:11:11.950049    1120 config.go:182] Loaded profile config "download-only-145800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W0108 20:11:11.950049    1120 start.go:810] api.Load failed for download-only-145800: filestore "download-only-145800": Docker machine "download-only-145800" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:11:11.950049    1120 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 20:11:11.950049    1120 start.go:810] api.Load failed for download-only-145800: filestore "download-only-145800": Docker machine "download-only-145800" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:11:17.374767    1120 out.go:97] Using the hyperv driver based on existing profile
	I0108 20:11:17.374767    1120 start.go:298] selected driver: hyperv
	I0108 20:11:17.374767    1120 start.go:902] validating driver "hyperv" against &{Name:download-only-145800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-145800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:11:17.421505    1120 cni.go:84] Creating CNI manager for ""
	I0108 20:11:17.421781    1120 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 20:11:17.421781    1120 start_flags.go:323] config:
	{Name:download-only-145800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-145800 Namespa
ce:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:11:17.422110    1120 iso.go:125] acquiring lock: {Name:mk1869c5b5033c1301af33a7fa364ec43dd63efe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:11:17.423486    1120 out.go:97] Starting control plane node download-only-145800 in cluster download-only-145800
	I0108 20:11:17.423486    1120 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0108 20:11:17.466760    1120 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0108 20:11:17.467440    1120 cache.go:56] Caching tarball of preloaded images
	I0108 20:11:17.467820    1120 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0108 20:11:17.468802    1120 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0108 20:11:17.468802    1120 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0108 20:11:17.539014    1120 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:74b99cd9fa76659778caad266ad399ba -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-145800"

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 20:11:24.771455    2452 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.25s)
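Note on the recurring stderr warning: the Docker CLI keeps context metadata under contexts\meta\<digest>\meta.json, and the directory name appears to be the hex SHA-256 of the context name, so the path in these warnings is simply the store location for the "default" context that was never created on this worker. The short sketch below prints the digest for "default" so it can be compared against the hash in the warning; it is an illustration, not Docker CLI code.

// Hedged sketch: print sha256("default") to compare with the meta.json path
// in the "Unable to resolve the current Docker CLI context" warnings above.
package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	sum := sha256.Sum256([]byte("default"))
	fmt.Printf("%x\n", sum) // compare with the digest in the warning's path
}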

                                                
                                    
TestDownloadOnly/DeleteAll (1.28s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2807909s)
--- PASS: TestDownloadOnly/DeleteAll (1.28s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (1.22s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-145800
aaa_download_only_test.go:202: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-145800: (1.2161564s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (1.22s)

                                                
                                    
TestBinaryMirror (7.12s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-038100 --alsologtostderr --binary-mirror http://127.0.0.1:50621 --driver=hyperv
aaa_download_only_test.go:307: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-038100 --alsologtostderr --binary-mirror http://127.0.0.1:50621 --driver=hyperv: (6.2353864s)
helpers_test.go:175: Cleaning up "binary-mirror-038100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-038100
--- PASS: TestBinaryMirror (7.12s)
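Note: TestBinaryMirror starts minikube with --binary-mirror http://127.0.0.1:50621 so kubectl/kubeadm/kubelet downloads come from a local endpoint instead of dl.k8s.io. A hedged sketch of such a mirror follows — a plain file server over a directory laid out like the upstream release paths. The directory name, layout, and port are assumptions for illustration, not what the test itself serves.

// Hedged sketch: a local binary mirror for --binary-mirror to point at.
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a directory that mirrors the upstream layout, e.g.
	//   ./mirror/release/v1.28.4/bin/windows/amd64/kubectl.exe
	fs := http.FileServer(http.Dir("./mirror"))
	log.Println("serving binary mirror on http://127.0.0.1:50621")
	log.Fatal(http.ListenAndServe("127.0.0.1:50621", fs))
}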

                                                
                                    
TestOffline (253.09s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-152000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-152000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (3m35.9484412s)
helpers_test.go:175: Cleaning up "offline-docker-152000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-152000
E0108 22:10:48.016568    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
E0108 22:11:01.510712    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-152000: (37.1448627s)
--- PASS: TestOffline (253.09s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.29s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-084500
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-084500: exit status 85 (291.0582ms)

                                                
                                                
-- stdout --
	* Profile "addons-084500" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-084500"

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 20:11:35.917136    3168 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.29s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.3s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-084500
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-084500: exit status 85 (297.1714ms)

                                                
                                                
-- stdout --
	* Profile "addons-084500" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-084500"

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 20:11:35.917136    9112 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.30s)

                                                
                                    
TestAddons/Setup (376.12s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-084500 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-084500 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m16.1152004s)
--- PASS: TestAddons/Setup (376.12s)
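
Note: TestAddons/Setup brings up a single Hyper-V profile with every addon under test enabled in one start invocation. For reference, a manual reproduction would look roughly like the command logged above (the profile name, memory size, and addon list are simply the values used in this run):

    out/minikube-windows-amd64.exe start -p addons-084500 --wait=true --memory=4000 --driver=hyperv --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=ingress --addons=ingress-dns --addons=helm-tiller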

                                                
                                    
TestAddons/parallel/Ingress (64.67s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-084500 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-084500 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-084500 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b2c5a00e-1ce0-42aa-bf30-6b4b50277d2b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b2c5a00e-1ce0-42aa-bf30-6b4b50277d2b] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.0056109s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-084500 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-084500 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.85695s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-084500 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0108 20:19:31.704087    9252 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context addons-084500 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-084500 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-084500 ip: (2.6020873s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.29.100.38
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-084500 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p addons-084500 addons disable ingress-dns --alsologtostderr -v=1: (15.5578565s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-084500 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p addons-084500 addons disable ingress --alsologtostderr -v=1: (21.5546708s)
--- PASS: TestAddons/parallel/Ingress (64.67s)
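
Note: a condensed sketch of the ingress flow exercised above, using the same manifests and commands that appear in the log; 172.29.100.38 is the cluster IP that "minikube ip" reported in this particular run:

    kubectl --context addons-084500 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
    kubectl --context addons-084500 replace --force -f testdata\nginx-ingress-v1.yaml
    kubectl --context addons-084500 replace --force -f testdata\nginx-pod-svc.yaml
    out/minikube-windows-amd64.exe -p addons-084500 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    kubectl --context addons-084500 replace --force -f testdata\ingress-dns-example-v1.yaml
    out/minikube-windows-amd64.exe -p addons-084500 ip
    nslookup hello-john.test 172.29.100.38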

                                                
                                    
TestAddons/parallel/InspektorGadget (25.75s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-vfh65" [161400a4-6cfc-47fc-a4f9-2c7ed8aa8592] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0186773s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-084500
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-084500: (20.7319012s)
--- PASS: TestAddons/parallel/InspektorGadget (25.75s)

                                                
                                    
TestAddons/parallel/MetricsServer (21.7s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 51.206ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-98ddh" [acd30cd8-7ec4-4df1-b6c1-c2fe6dcb3bf1] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0102212s
addons_test.go:415: (dbg) Run:  kubectl --context addons-084500 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-084500 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-084500 addons disable metrics-server --alsologtostderr -v=1: (16.4619018s)
--- PASS: TestAddons/parallel/MetricsServer (21.70s)
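
Note: the functional part of this check is simply that the metrics API answers once the metrics-server pod is Running; reproduced from the log:

    kubectl --context addons-084500 top pods -n kube-system
    out/minikube-windows-amd64.exe -p addons-084500 addons disable metrics-server --alsologtostderr -v=1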

                                                
                                    
TestAddons/parallel/HelmTiller (27.08s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 6.2129ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-6qkvr" [b1da5a47-1256-407e-b524-de2bd1b2d0ec] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.00841s
addons_test.go:473: (dbg) Run:  kubectl --context addons-084500 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-084500 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.7718918s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-084500 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-084500 addons disable helm-tiller --alsologtostderr -v=1: (15.2762497s)
--- PASS: TestAddons/parallel/HelmTiller (27.08s)
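
Note: the Tiller check runs a throwaway Helm 2 client pod against the in-cluster tiller-deploy and then disables the addon; commands copied from the log:

    kubectl --context addons-084500 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
    out/minikube-windows-amd64.exe -p addons-084500 addons disable helm-tiller --alsologtostderr -v=1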

                                                
                                    
TestAddons/parallel/CSI (95.05s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 56.5016ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-084500 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-084500 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ff681f32-3d7d-43a4-8a30-374c03333e78] Pending
helpers_test.go:344: "task-pv-pod" [ff681f32-3d7d-43a4-8a30-374c03333e78] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ff681f32-3d7d-43a4-8a30-374c03333e78] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 22.0256214s
addons_test.go:584: (dbg) Run:  kubectl --context addons-084500 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-084500 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-084500 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-084500 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-084500 delete pod task-pv-pod: (1.1529117s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-084500 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-084500 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-084500 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [40438817-77a5-46b6-83b7-d542628949ae] Pending
helpers_test.go:344: "task-pv-pod-restore" [40438817-77a5-46b6-83b7-d542628949ae] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [40438817-77a5-46b6-83b7-d542628949ae] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0097752s
addons_test.go:626: (dbg) Run:  kubectl --context addons-084500 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-084500 delete pod task-pv-pod-restore: (1.8704656s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-084500 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-084500 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-084500 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-084500 addons disable csi-hostpath-driver --alsologtostderr -v=1: (21.2325684s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-084500 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-084500 addons disable volumesnapshots --alsologtostderr -v=1: (15.7295491s)
--- PASS: TestAddons/parallel/CSI (95.05s)
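
Note: the CSI run above is a full provision / snapshot / restore cycle against the csi-hostpath driver. A condensed sketch using the manifests and object names from the log (# comments are editorial annotations, not part of the commands):

    kubectl --context addons-084500 create -f testdata\csi-hostpath-driver\pvc.yaml            # PVC "hpvc"
    kubectl --context addons-084500 create -f testdata\csi-hostpath-driver\pv-pod.yaml         # pod "task-pv-pod" mounting the PVC
    kubectl --context addons-084500 create -f testdata\csi-hostpath-driver\snapshot.yaml       # VolumeSnapshot "new-snapshot-demo"
    kubectl --context addons-084500 delete pod task-pv-pod
    kubectl --context addons-084500 delete pvc hpvc
    kubectl --context addons-084500 create -f testdata\csi-hostpath-driver\pvc-restore.yaml    # restored PVC "hpvc-restore"
    kubectl --context addons-084500 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml # pod "task-pv-pod-restore"
    kubectl --context addons-084500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default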

                                                
                                    
TestAddons/parallel/Headlamp (32.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-084500 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-084500 --alsologtostderr -v=1: (16.2471703s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-mjkwm" [b498b98a-ab2b-4540-b942-12093c60d36d] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-mjkwm" [b498b98a-ab2b-4540-b942-12093c60d36d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-mjkwm" [b498b98a-ab2b-4540-b942-12093c60d36d] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.0233792s
--- PASS: TestAddons/parallel/Headlamp (32.27s)

                                                
                                    
TestAddons/parallel/CloudSpanner (20.8s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-g9rnn" [183aadd3-8321-445a-a8c0-33ba15992182] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0200227s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-084500
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-084500: (15.7588956s)
--- PASS: TestAddons/parallel/CloudSpanner (20.80s)

                                                
                                    
TestAddons/parallel/LocalPath (85.16s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-084500 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-084500 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084500 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [0f6f3d8d-d704-45ff-8423-f5b053639813] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [0f6f3d8d-d704-45ff-8423-f5b053639813] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [0f6f3d8d-d704-45ff-8423-f5b053639813] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.0263005s
addons_test.go:891: (dbg) Run:  kubectl --context addons-084500 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-084500 ssh "cat /opt/local-path-provisioner/pvc-b343249b-7af6-4a98-9f32-9a613e622e0b_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-084500 ssh "cat /opt/local-path-provisioner/pvc-b343249b-7af6-4a98-9f32-9a613e622e0b_default_test-pvc/file1": (10.4041107s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-084500 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-084500 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-084500 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-084500 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m0.8476504s)
--- PASS: TestAddons/parallel/LocalPath (85.16s)
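
Note: the local-path check provisions a PVC through the Rancher local-path provisioner, lets a busybox pod write into it, and reads the file back from the node over SSH; the pvc-... directory name below is generated per claim and is specific to this run:

    kubectl --context addons-084500 apply -f testdata\storage-provisioner-rancher\pvc.yaml
    kubectl --context addons-084500 apply -f testdata\storage-provisioner-rancher\pod.yaml
    kubectl --context addons-084500 get pvc test-pvc -o jsonpath={.status.phase} -n default
    out/minikube-windows-amd64.exe -p addons-084500 ssh "cat /opt/local-path-provisioner/pvc-b343249b-7af6-4a98-9f32-9a613e622e0b_default_test-pvc/file1"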

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (20.24s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-jftp2" [1d58fcc0-907e-4efe-a35d-abfba69e4440] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0233118s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-084500
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-084500: (15.2089678s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (20.24s)

                                                
                                    
TestAddons/parallel/Yakd (5.03s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-gcq5f" [f10290fd-df07-47a6-8a6f-46a5dcd1bef0] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0233036s
--- PASS: TestAddons/parallel/Yakd (5.03s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.36s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-084500 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-084500 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.36s)

                                                
                                    
TestAddons/StoppedEnableDisable (46.19s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-084500
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-084500: (34.3893897s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-084500
addons_test.go:176: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-084500: (4.64353s)
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-084500
addons_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-084500: (4.5949801s)
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-084500
addons_test.go:185: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-084500: (2.5593384s)
--- PASS: TestAddons/StoppedEnableDisable (46.19s)
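
Note: this check confirms that addon enable/disable still works against a stopped profile; the sequence from the log is:

    out/minikube-windows-amd64.exe stop -p addons-084500
    out/minikube-windows-amd64.exe addons enable dashboard -p addons-084500
    out/minikube-windows-amd64.exe addons disable dashboard -p addons-084500
    out/minikube-windows-amd64.exe addons disable gvisor -p addons-084500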

                                                
                                    
TestCertOptions (490.15s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-283400 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
E0108 22:17:58.314439    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-283400 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (7m6.0576915s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-283400 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-283400 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (10.2321607s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-283400 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-283400 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-283400 -- "sudo cat /etc/kubernetes/admin.conf": (9.9596321s)
helpers_test.go:175: Cleaning up "cert-options-283400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-283400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-283400: (43.740354s)
--- PASS: TestCertOptions (490.15s)
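
Note: a sketch of the certificate-options flow, with flags copied from the logged start command; the extra SANs (192.168.15.15, www.google.com) and API server port 8555 are just the test's sample values:

    out/minikube-windows-amd64.exe start -p cert-options-283400 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
    out/minikube-windows-amd64.exe -p cert-options-283400 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
    out/minikube-windows-amd64.exe ssh -p cert-options-283400 -- "sudo cat /etc/kubernetes/admin.conf"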

                                                
                                    
TestDockerFlags (508.75s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-715600 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
E0108 22:12:52.249783    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 22:12:58.306521    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-715600 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (7m31.0250621s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-715600 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-715600 ssh "sudo systemctl show docker --property=Environment --no-pager": (10.1585817s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-715600 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-715600 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (10.2244021s)
helpers_test.go:175: Cleaning up "docker-flags-715600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-715600
E0108 22:20:48.031623    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-715600: (37.3375954s)
--- PASS: TestDockerFlags (508.75s)
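
Note: the --docker-env and --docker-opt values are passed through to the Docker daemon at start and then verified via the systemd unit's Environment and ExecStart properties; commands copied from the log:

    out/minikube-windows-amd64.exe start -p docker-flags-715600 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
    out/minikube-windows-amd64.exe -p docker-flags-715600 ssh "sudo systemctl show docker --property=Environment --no-pager"
    out/minikube-windows-amd64.exe -p docker-flags-715600 ssh "sudo systemctl show docker --property=ExecStart --no-pager"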

                                                
                                    
TestForceSystemdFlag (402.74s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-852700 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-852700 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (5m55.2171315s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-852700 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-852700 ssh "docker info --format {{.CgroupDriver}}": (10.1332961s)
helpers_test.go:175: Cleaning up "force-systemd-flag-852700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-852700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-852700: (37.3905816s)
--- PASS: TestForceSystemdFlag (402.74s)
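
Note: a minimal reproduction of the cgroup-driver check above (TestForceSystemdEnv below exercises the same check, but drives the setting through the environment rather than the --force-systemd flag); the # comment is editorial:

    out/minikube-windows-amd64.exe start -p force-systemd-flag-852700 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
    out/minikube-windows-amd64.exe -p force-systemd-flag-852700 ssh "docker info --format {{.CgroupDriver}}"   # expected to report systemd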

                                                
                                    
TestForceSystemdEnv (502.17s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-868600 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
E0108 22:07:52.246366    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 22:07:58.304956    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-868600 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (7m35.9095932s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-868600 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-868600 ssh "docker info --format {{.CgroupDriver}}": (10.3320399s)
helpers_test.go:175: Cleaning up "force-systemd-env-868600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-868600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-868600: (35.9248105s)
--- PASS: TestForceSystemdEnv (502.17s)

                                                
                                    
TestErrorSpam/start (17.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 start --dry-run: (5.7210947s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 start --dry-run: (5.7767033s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 start --dry-run: (5.8379792s)
--- PASS: TestErrorSpam/start (17.34s)

                                                
                                    
TestErrorSpam/status (36.73s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 status: (12.7027701s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 status: (11.9180448s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 status: (12.1067485s)
--- PASS: TestErrorSpam/status (36.73s)

                                                
                                    
TestErrorSpam/pause (22.6s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 pause: (7.8247317s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 pause: (7.396565s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 pause: (7.3783622s)
--- PASS: TestErrorSpam/pause (22.60s)

                                                
                                    
TestErrorSpam/unpause (22.9s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 unpause: (7.6647254s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 unpause: (7.6037683s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 unpause: (7.6240681s)
--- PASS: TestErrorSpam/unpause (22.90s)

                                                
                                    
TestErrorSpam/stop (51.25s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 stop: (33.5729066s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 stop
E0108 20:27:52.211514    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 stop: (8.9396052s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-974700 stop: (8.7374891s)
--- PASS: TestErrorSpam/stop (51.25s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\3008\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
TestFunctional/serial/StartWithProxy (202.02s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-242800 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
functional_test.go:2233: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-242800 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m22.0129408s)
--- PASS: TestFunctional/serial/StartWithProxy (202.02s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (110.31s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-242800 --alsologtostderr -v=8
E0108 20:32:52.216933    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-242800 --alsologtostderr -v=8: (1m50.3083712s)
functional_test.go:659: soft start took 1m50.310102s for "functional-242800" cluster.
--- PASS: TestFunctional/serial/SoftStart (110.31s)

                                                
                                    
TestFunctional/serial/KubeContext (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.14s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-242800 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (27.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 cache add registry.k8s.io/pause:3.1: (9.6633653s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 cache add registry.k8s.io/pause:3.3: (8.9458341s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 cache add registry.k8s.io/pause:latest: (8.6887584s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (27.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (10.16s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-242800 C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2874863560\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-242800 C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2874863560\001: (1.6311059s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 cache add minikube-local-cache-test:functional-242800
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 cache add minikube-local-cache-test:functional-242800: (8.0524913s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 cache delete minikube-local-cache-test:functional-242800
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-242800
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (10.16s)
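
Note: the local-cache variant builds a throwaway image on the host, adds it to minikube's image cache, and then removes it again; commands copied from the log, except that the temporary build-context directory is machine-specific and shown here as a placeholder:

    docker build -t minikube-local-cache-test:functional-242800 <temp-build-context-dir>
    out/minikube-windows-amd64.exe -p functional-242800 cache add minikube-local-cache-test:functional-242800
    out/minikube-windows-amd64.exe -p functional-242800 cache delete minikube-local-cache-test:functional-242800
    docker rmi minikube-local-cache-test:functional-242800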

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.34s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 ssh sudo crictl images: (9.337137s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.34s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (36.62s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.4149453s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-242800 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.5185s)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 20:34:31.903173   11076 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 cache reload: (8.1523844s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.5308924s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (36.62s)
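
Note: the reload check deletes a cached image from inside the node and restores it from minikube's on-disk cache; sequence copied from the log (# comments are editorial annotations):

    out/minikube-windows-amd64.exe -p functional-242800 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-windows-amd64.exe -p functional-242800 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
    out/minikube-windows-amd64.exe -p functional-242800 cache reload
    out/minikube-windows-amd64.exe -p functional-242800 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again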

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.55s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.55s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.52s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 kubectl -- --context functional-242800 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.52s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.83s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out\kubectl.exe --context functional-242800 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.83s)

                                                
                                    
TestFunctional/serial/ExtraConfig (121.12s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-242800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-242800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m1.1169529s)
functional_test.go:757: restart took 2m1.1174044s for "functional-242800" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (121.12s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.2s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-242800 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.20s)

                                                
                                    
TestFunctional/serial/LogsCmd (8.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 logs: (8.463181s)
--- PASS: TestFunctional/serial/LogsCmd (8.46s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (10.55s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 logs --file C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2646316772\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 logs --file C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2646316772\001\logs.txt: (10.5458727s)
--- PASS: TestFunctional/serial/LogsFileCmd (10.55s)

                                                
                                    
TestFunctional/serial/InvalidService (20.93s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-242800 apply -f testdata\invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-242800
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-242800: exit status 115 (16.7174486s)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://172.29.109.168:32653 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 20:37:25.735358    2712 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_service_c9bf6787273d25f6c9d72c0b156373dea6a4fe44_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-242800 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (20.93s)
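The SVC_UNREACHABLE exit above is the expected outcome: testdata\invalidsvc.yaml defines a Service with no running pod behind it, so minikube cannot open a tunnel to it. A minimal Go sketch (hypothetical, not the test's code) that detects the same condition by reading the service's Endpoints object, where an empty address list means no ready pod backs the service:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// endpoints mirrors only the fields of the Endpoints JSON used here.
	type endpoints struct {
		Subsets []struct {
			Addresses []struct {
				IP string `json:"ip"`
			} `json:"addresses"`
		} `json:"subsets"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-242800",
			"get", "endpoints", "invalid-svc", "-o=json").Output()
		if err != nil {
			panic(err)
		}
		var ep endpoints
		if err := json.Unmarshal(out, &ep); err != nil {
			panic(err)
		}
		backing := 0
		for _, s := range ep.Subsets {
			backing += len(s.Addresses)
		}
		if backing == 0 {
			fmt.Println("no running pod backs service invalid-svc (SVC_UNREACHABLE)")
			return
		}
		fmt.Printf("service invalid-svc backed by %d endpoint(s)\n", backing)
	}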

                                                
                                    
TestFunctional/parallel/StatusCmd (45.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 status: (14.5998233s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (14.6648183s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 status -o json: (15.96267s)
--- PASS: TestFunctional/parallel/StatusCmd (45.23s)
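The -f value is a Go text/template evaluated against minikube's status struct; the "kublet:" spelling in the command is only a literal label in the test's format string, while the field reference {{.Kubelet}} is what actually gets resolved. A minimal sketch of the same rendering over an illustrative local Status struct (field values are examples, not minikube's own type):

	package main

	import (
		"os"
		"text/template"
	)

	// Status mirrors the fields referenced by the -f template in the command above.
	type Status struct {
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse(
			"host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
		s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
		if err := tmpl.Execute(os.Stdout, s); err != nil {
			panic(err)
		}
	}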

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (30.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-242800 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-242800 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-2xjfn" [fc46d87d-7f75-44ad-be4b-57f21a658162] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-2xjfn" [fc46d87d-7f75-44ad-be4b-57f21a658162] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.0107571s
functional_test.go:1648: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 service hello-node-connect --url
functional_test.go:1648: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 service hello-node-connect --url: (18.5897804s)
functional_test.go:1654: found endpoint for hello-node-connect: http://172.29.109.168:32335
functional_test.go:1674: http://172.29.109.168:32335: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-2xjfn

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.29.109.168:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=172.29.109.168:32335
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (30.05s)
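The subtest resolves the NodePort URL with "service hello-node-connect --url" and then issues a plain HTTP GET, which produced the echoserver reply above. A minimal Go sketch of the same probe; the URL is copied from this run's log and will differ between runs:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// URL taken from the log above; it changes on every run.
		url := "http://172.29.109.168:32335"
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}
		fmt.Printf("status %d from %s\n%s", resp.StatusCode, url, body)
	}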

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.90s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (40.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [280d79a1-ac58-40c3-bf50-998fe852beb6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0179313s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-242800 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-242800 apply -f testdata/storage-provisioner/pvc.yaml
E0108 20:39:15.397839    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-242800 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-242800 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3787de2d-83b4-4f31-b595-2a9d480a4da0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3787de2d-83b4-4f31-b595-2a9d480a4da0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.0146344s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-242800 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-242800 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-242800 delete -f testdata/storage-provisioner/pod.yaml: (1.1317267s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-242800 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [36246889-f3f6-4300-8ed7-f55b040ef100] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [36246889-f3f6-4300-8ed7-f55b040ef100] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.0221683s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-242800 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.50s)
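The persistence check in this subtest is: write a marker file through the first sp-pod, delete that pod, recreate it from the same manifest, and confirm the file is still present on the PVC-backed mount. A compact Go sketch of that flow (the run helper is illustrative; manifest paths, pod name, and context come from the log, and the real test also waits for the recreated pod to become Ready before the final check):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes kubectl against the functional-242800 context and panics on failure.
	func run(args ...string) {
		cmd := exec.Command("kubectl", append([]string{"--context", "functional-242800"}, args...)...)
		out, err := cmd.CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("%v: %s", err, out))
		}
		fmt.Printf("%s", out)
	}

	func main() {
		// Write a marker file onto the PVC-backed mount, then recreate the pod.
		run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// The real test waits for the new pod to be Running before this check.
		run("exec", "sp-pod", "--", "ls", "/tmp/mount")
	}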

                                                
                                    
TestFunctional/parallel/SSHCmd (23.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 ssh "echo hello"
functional_test.go:1724: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 ssh "echo hello": (11.5951116s)
functional_test.go:1741: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 ssh "cat /etc/hostname"
functional_test.go:1741: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 ssh "cat /etc/hostname": (12.1652304s)
--- PASS: TestFunctional/parallel/SSHCmd (23.76s)

                                                
                                    
TestFunctional/parallel/CpCmd (63.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 cp testdata\cp-test.txt /home/docker/cp-test.txt: (9.6782262s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 ssh -n functional-242800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 ssh -n functional-242800 "sudo cat /home/docker/cp-test.txt": (11.4149146s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 cp functional-242800:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalparallelCpCmd2393151244\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 cp functional-242800:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalparallelCpCmd2393151244\001\cp-test.txt: (10.4285794s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 ssh -n functional-242800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 ssh -n functional-242800 "sudo cat /home/docker/cp-test.txt": (11.3538205s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.5433353s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 ssh -n functional-242800 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 ssh -n functional-242800 "sudo cat /tmp/does/not/exist/cp-test.txt": (11.6435541s)
--- PASS: TestFunctional/parallel/CpCmd (63.07s)

                                                
                                    
TestFunctional/parallel/MySQL (61.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-242800 replace --force -f testdata\mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-96rb8" [cc9ee6ba-f937-4468-8f5b-25a6d60c204b] Pending
helpers_test.go:344: "mysql-859648c796-96rb8" [cc9ee6ba-f937-4468-8f5b-25a6d60c204b] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-96rb8" [cc9ee6ba-f937-4468-8f5b-25a6d60c204b] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 50.0120987s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-242800 exec mysql-859648c796-96rb8 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-242800 exec mysql-859648c796-96rb8 -- mysql -ppassword -e "show databases;": exit status 1 (356.225ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-242800 exec mysql-859648c796-96rb8 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-242800 exec mysql-859648c796-96rb8 -- mysql -ppassword -e "show databases;": exit status 1 (412.3864ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-242800 exec mysql-859648c796-96rb8 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-242800 exec mysql-859648c796-96rb8 -- mysql -ppassword -e "show databases;": exit status 1 (358.578ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-242800 exec mysql-859648c796-96rb8 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-242800 exec mysql-859648c796-96rb8 -- mysql -ppassword -e "show databases;": exit status 1 (519.3736ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-242800 exec mysql-859648c796-96rb8 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (61.96s)
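The ERROR 2002 and ERROR 1045 responses above are expected while the MySQL container is still initializing; the test simply retries the query until it succeeds, which it eventually did here. A minimal retry sketch in the same spirit (attempt count and interval are illustrative; pod name and context come from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		const attempts = 10
		for i := 1; i <= attempts; i++ {
			out, err := exec.Command("kubectl", "--context", "functional-242800",
				"exec", "mysql-859648c796-96rb8", "--",
				"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Printf("attempt %d succeeded:\n%s", i, out)
				return
			}
			// The ERROR 2002 and ERROR 1045 failures seen in the log above
			// cleared once the container finished initializing.
			fmt.Printf("attempt %d failed: %v\n", i, err)
			time.Sleep(10 * time.Second)
		}
		panic("mysql never became ready")
	}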

                                                
                                    
TestFunctional/parallel/FileSync (11.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/3008/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 ssh "sudo cat /etc/test/nested/copy/3008/hosts"
functional_test.go:1930: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 ssh "sudo cat /etc/test/nested/copy/3008/hosts": (11.2105496s)
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (11.21s)

                                                
                                    
TestFunctional/parallel/CertSync (67.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/3008.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 ssh "sudo cat /etc/ssl/certs/3008.pem"
functional_test.go:1972: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 ssh "sudo cat /etc/ssl/certs/3008.pem": (12.328296s)
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/3008.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 ssh "sudo cat /usr/share/ca-certificates/3008.pem"
functional_test.go:1972: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 ssh "sudo cat /usr/share/ca-certificates/3008.pem": (10.7907377s)
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1972: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 ssh "sudo cat /etc/ssl/certs/51391683.0": (10.6286498s)
functional_test.go:1998: Checking for existence of /etc/ssl/certs/30082.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 ssh "sudo cat /etc/ssl/certs/30082.pem"
functional_test.go:1999: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 ssh "sudo cat /etc/ssl/certs/30082.pem": (11.2335776s)
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/30082.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 ssh "sudo cat /usr/share/ca-certificates/30082.pem"
functional_test.go:1999: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 ssh "sudo cat /usr/share/ca-certificates/30082.pem": (11.3473828s)
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1999: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (11.3856048s)
--- PASS: TestFunctional/parallel/CertSync (67.72s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-242800 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.21s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (12.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-242800 ssh "sudo systemctl is-active crio": exit status 1 (12.3949054s)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 20:37:45.302696   10160 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (12.40s)
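The "Process exited with status 3" line is systemctl is-active reporting that the crio unit is inactive (systemd uses a non-zero status, 3 here, for inactive units), which is exactly what the test expects for the runtime that is not in use. A small Go sketch showing how such an exit code can be read from a failed command (run directly rather than through minikube ssh):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Report the is-active output and exit code for a unit.
		out, err := exec.Command("systemctl", "is-active", "crio").CombinedOutput()
		fmt.Printf("output: %s", out)
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Printf("exit code: %d\n", exitErr.ExitCode())
			return
		}
		if err != nil {
			panic(err)
		}
		fmt.Println("exit code: 0 (unit is active)")
	}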

                                                
                                    
TestFunctional/parallel/License (3.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2287: (dbg) Done: out/minikube-windows-amd64.exe license: (3.5175484s)
--- PASS: TestFunctional/parallel/License (3.54s)

                                                
                                    
TestFunctional/parallel/Version/short (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 version --short
--- PASS: TestFunctional/parallel/Version/short (0.33s)

                                                
                                    
TestFunctional/parallel/Version/components (8.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 version -o=json --components: (8.8672573s)
--- PASS: TestFunctional/parallel/Version/components (8.87s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (7.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 image ls --format short --alsologtostderr: (7.6814796s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-242800 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-242800
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-242800
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-242800 image ls --format short --alsologtostderr:
W0108 20:40:42.473804    7136 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0108 20:40:42.554096    7136 out.go:296] Setting OutFile to fd 1212 ...
I0108 20:40:42.568119    7136 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:40:42.568195    7136 out.go:309] Setting ErrFile to fd 1116...
I0108 20:40:42.568195    7136 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:40:42.583043    7136 config.go:182] Loaded profile config "functional-242800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 20:40:42.583298    7136 config.go:182] Loaded profile config "functional-242800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 20:40:42.584037    7136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-242800 ).state
I0108 20:40:44.846850    7136 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0108 20:40:44.846942    7136 main.go:141] libmachine: [stderr =====>] : 
I0108 20:40:44.861844    7136 ssh_runner.go:195] Run: systemctl --version
I0108 20:40:44.861844    7136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-242800 ).state
I0108 20:40:47.120435    7136 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0108 20:40:47.120500    7136 main.go:141] libmachine: [stderr =====>] : 
I0108 20:40:47.120552    7136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-242800 ).networkadapters[0]).ipaddresses[0]
I0108 20:40:49.796212    7136 main.go:141] libmachine: [stdout =====>] : 172.29.109.168

                                                
                                                
I0108 20:40:49.796212    7136 main.go:141] libmachine: [stderr =====>] : 
I0108 20:40:49.796212    7136 sshutil.go:53] new ssh client: &{IP:172.29.109.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-242800\id_rsa Username:docker}
I0108 20:40:49.915387    7136 ssh_runner.go:235] Completed: systemctl --version: (5.053517s)
I0108 20:40:49.930827    7136 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (7.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (7.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 image ls --format table --alsologtostderr: (7.8706316s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-242800 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | d453dd892d935 | 187MB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| docker.io/library/nginx                     | alpine            | 529b5644c430c | 42.6MB |
| gcr.io/google-containers/addon-resizer      | functional-242800 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/localhost/my-image                | functional-242800 | fad219e51318d | 1.24MB |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-242800 | 3bd0eb42d21b3 | 30B    |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-242800 image ls --format table --alsologtostderr:
W0108 20:41:13.254912   12408 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0108 20:41:13.351906   12408 out.go:296] Setting OutFile to fd 736 ...
I0108 20:41:13.368067   12408 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:41:13.368067   12408 out.go:309] Setting ErrFile to fd 776...
I0108 20:41:13.368257   12408 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:41:13.389647   12408 config.go:182] Loaded profile config "functional-242800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 20:41:13.390631   12408 config.go:182] Loaded profile config "functional-242800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 20:41:13.390979   12408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-242800 ).state
I0108 20:41:15.681444   12408 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0108 20:41:15.682047   12408 main.go:141] libmachine: [stderr =====>] : 
I0108 20:41:15.705037   12408 ssh_runner.go:195] Run: systemctl --version
I0108 20:41:15.705037   12408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-242800 ).state
I0108 20:41:17.998618   12408 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0108 20:41:17.998618   12408 main.go:141] libmachine: [stderr =====>] : 
I0108 20:41:17.998618   12408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-242800 ).networkadapters[0]).ipaddresses[0]
I0108 20:41:20.782970   12408 main.go:141] libmachine: [stdout =====>] : 172.29.109.168

                                                
                                                
I0108 20:41:20.783145   12408 main.go:141] libmachine: [stderr =====>] : 
I0108 20:41:20.783367   12408 sshutil.go:53] new ssh client: &{IP:172.29.109.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-242800\id_rsa Username:docker}
I0108 20:41:20.905973   12408 ssh_runner.go:235] Completed: systemctl --version: (5.20091s)
I0108 20:41:20.920347   12408 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
E0108 20:42:52.216964    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.87s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (7.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 image ls --format json --alsologtostderr: (7.7055004s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-242800 image ls --format json --alsologtostderr:
[{"id":"3bd0eb42d21b36e2942620ad9e59c5814d2aac73209c7c6ab61be6e0f589098d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-242800"],"size":"30"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d
0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manag
er:v1.28.4"],"size":"122000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-242800"],"size":"32900000"},{"id":"fad219e51318dfc23a3dec30544a246d5d879f91e7f964ad3e265769bd93b8a5","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-242800"],"size":"1240000"},{"id":"529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-242800 image ls --format json --alsologtostderr:
W0108 20:41:08.203634    5720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0108 20:41:08.284403    5720 out.go:296] Setting OutFile to fd 924 ...
I0108 20:41:08.285082    5720 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:41:08.285082    5720 out.go:309] Setting ErrFile to fd 1376...
I0108 20:41:08.285175    5720 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:41:08.302226    5720 config.go:182] Loaded profile config "functional-242800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 20:41:08.302226    5720 config.go:182] Loaded profile config "functional-242800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 20:41:08.303546    5720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-242800 ).state
I0108 20:41:10.530691    5720 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0108 20:41:10.530691    5720 main.go:141] libmachine: [stderr =====>] : 
I0108 20:41:10.553518    5720 ssh_runner.go:195] Run: systemctl --version
I0108 20:41:10.553518    5720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-242800 ).state
I0108 20:41:12.803212    5720 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0108 20:41:12.803275    5720 main.go:141] libmachine: [stderr =====>] : 
I0108 20:41:12.803275    5720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-242800 ).networkadapters[0]).ipaddresses[0]
I0108 20:41:15.569901    5720 main.go:141] libmachine: [stdout =====>] : 172.29.109.168

                                                
                                                
I0108 20:41:15.569901    5720 main.go:141] libmachine: [stderr =====>] : 
I0108 20:41:15.570437    5720 sshutil.go:53] new ssh client: &{IP:172.29.109.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-242800\id_rsa Username:docker}
I0108 20:41:15.691229    5720 ssh_runner.go:235] Completed: systemctl --version: (5.1375948s)
I0108 20:41:15.704036    5720 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.71s)
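The JSON above is an array of image records with id, repoDigests, repoTags, and size (reported as a string of bytes). A minimal Go sketch (illustrative struct, not minikube's own types) for parsing the output of image ls --format json:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// image matches the fields visible in the JSON output above.
	type image struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"` // reported as a string, in bytes
	}

	func main() {
		out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", "functional-242800",
			"image", "ls", "--format", "json").Output()
		if err != nil {
			panic(err)
		}
		var images []image
		if err := json.Unmarshal(out, &images); err != nil {
			panic(err)
		}
		for _, img := range images {
			fmt.Printf("%-70s %s bytes\n", img.RepoTags, img.Size)
		}
	}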

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (8.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 image ls --format yaml --alsologtostderr: (8.0403108s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-242800 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 3bd0eb42d21b36e2942620ad9e59c5814d2aac73209c7c6ab61be6e0f589098d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-242800
size: "30"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-242800
size: "32900000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: 529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-242800 image ls --format yaml --alsologtostderr:
W0108 20:41:00.163804    5596 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0108 20:41:00.240881    5596 out.go:296] Setting OutFile to fd 1088 ...
I0108 20:41:00.241960    5596 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:41:00.241960    5596 out.go:309] Setting ErrFile to fd 1056...
I0108 20:41:00.241960    5596 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:41:00.255632    5596 config.go:182] Loaded profile config "functional-242800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 20:41:00.256627    5596 config.go:182] Loaded profile config "functional-242800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 20:41:00.256627    5596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-242800 ).state
I0108 20:41:02.829946    5596 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0108 20:41:02.829946    5596 main.go:141] libmachine: [stderr =====>] : 
I0108 20:41:02.844675    5596 ssh_runner.go:195] Run: systemctl --version
I0108 20:41:02.844675    5596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-242800 ).state
I0108 20:41:05.124661    5596 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0108 20:41:05.124661    5596 main.go:141] libmachine: [stderr =====>] : 
I0108 20:41:05.124661    5596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-242800 ).networkadapters[0]).ipaddresses[0]
I0108 20:41:07.805187    5596 main.go:141] libmachine: [stdout =====>] : 172.29.109.168

                                                
                                                
I0108 20:41:07.805187    5596 main.go:141] libmachine: [stderr =====>] : 
I0108 20:41:07.805187    5596 sshutil.go:53] new ssh client: &{IP:172.29.109.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-242800\id_rsa Username:docker}
I0108 20:41:07.967605    5596 ssh_runner.go:235] Completed: systemctl --version: (5.1229062s)
I0108 20:41:07.979648    5596 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (8.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (28.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-242800 ssh pgrep buildkitd: exit status 1 (9.8426714s)

                                                
                                                
** stderr ** 
	W0108 20:40:50.152894    9400 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 image build -t localhost/my-image:functional-242800 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 image build -t localhost/my-image:functional-242800 testdata\build --alsologtostderr: (10.4842854s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-242800 image build -t localhost/my-image:functional-242800 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 7fedba87574e
Removing intermediate container 7fedba87574e
---> 1eb4ba64c356
Step 3/3 : ADD content.txt /
---> fad219e51318
Successfully built fad219e51318
Successfully tagged localhost/my-image:functional-242800
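Reconstructed from the three build steps above, the Dockerfile under testdata\build is equivalent to the following (an inference from this log, not a copy of the repository file), alongside a content.txt used by the ADD step:

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /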
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-242800 image build -t localhost/my-image:functional-242800 testdata\build --alsologtostderr:
W0108 20:40:59.998478    7176 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0108 20:41:00.096894    7176 out.go:296] Setting OutFile to fd 1412 ...
I0108 20:41:00.112828    7176 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:41:00.112828    7176 out.go:309] Setting ErrFile to fd 1424...
I0108 20:41:00.112828    7176 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:41:00.131811    7176 config.go:182] Loaded profile config "functional-242800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 20:41:00.148831    7176 config.go:182] Loaded profile config "functional-242800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 20:41:00.149804    7176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-242800 ).state
I0108 20:41:02.703921    7176 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0108 20:41:02.703921    7176 main.go:141] libmachine: [stderr =====>] : 
I0108 20:41:02.720692    7176 ssh_runner.go:195] Run: systemctl --version
I0108 20:41:02.720692    7176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-242800 ).state
I0108 20:41:05.014490    7176 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0108 20:41:05.014490    7176 main.go:141] libmachine: [stderr =====>] : 
I0108 20:41:05.014490    7176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-242800 ).networkadapters[0]).ipaddresses[0]
I0108 20:41:07.664581    7176 main.go:141] libmachine: [stdout =====>] : 172.29.109.168

                                                
                                                
I0108 20:41:07.664581    7176 main.go:141] libmachine: [stderr =====>] : 
I0108 20:41:07.664581    7176 sshutil.go:53] new ssh client: &{IP:172.29.109.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-242800\id_rsa Username:docker}
I0108 20:41:07.783105    7176 ssh_runner.go:235] Completed: systemctl --version: (5.0623895s)
I0108 20:41:07.783207    7176 build_images.go:151] Building image from path: C:\Users\jenkins.minikube7\AppData\Local\Temp\build.3085744278.tar
I0108 20:41:07.796886    7176 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0108 20:41:07.827901    7176 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3085744278.tar
I0108 20:41:07.842134    7176 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3085744278.tar: stat -c "%s %y" /var/lib/minikube/build/build.3085744278.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3085744278.tar': No such file or directory
I0108 20:41:07.842264    7176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\AppData\Local\Temp\build.3085744278.tar --> /var/lib/minikube/build/build.3085744278.tar (3072 bytes)
I0108 20:41:07.909027    7176 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3085744278
I0108 20:41:07.937462    7176 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3085744278 -xf /var/lib/minikube/build/build.3085744278.tar
I0108 20:41:07.961748    7176 docker.go:360] Building image: /var/lib/minikube/build/build.3085744278
I0108 20:41:07.972628    7176 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-242800 /var/lib/minikube/build/build.3085744278
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
I0108 20:41:10.227144    7176 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-242800 /var/lib/minikube/build/build.3085744278: (2.254506s)
I0108 20:41:10.241307    7176 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3085744278
I0108 20:41:10.275692    7176 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3085744278.tar
I0108 20:41:10.304207    7176 build_images.go:207] Built localhost/my-image:functional-242800 from C:\Users\jenkins.minikube7\AppData\Local\Temp\build.3085744278.tar
I0108 20:41:10.304346    7176 build_images.go:123] succeeded building to: functional-242800
I0108 20:41:10.304346    7176 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 image ls: (7.7806786s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (28.11s)
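Note: nearly every command in this report prints the same non-fatal stderr warning about the Docker CLI context "default" and a meta.json path under .docker\contexts\meta. The Docker CLI appears to key that directory by a digest (SHA-256) of the context name; the short Go sketch below is a hypothetical helper, not part of this test suite, and simply computes the SHA-256 of "default" so the result can be compared against the directory name in the logged path.

package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// Print the SHA-256 of the context name "default"; compare the output
	// with the directory component shown in the warning above.
	fmt.Printf("%x\n", sha256.Sum256([]byte("default")))
}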

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (4.49s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.2202146s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-242800
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (23.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 image load --daemon gcr.io/google-containers/addon-resizer:functional-242800 --alsologtostderr
E0108 20:37:52.221258    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 image load --daemon gcr.io/google-containers/addon-resizer:functional-242800 --alsologtostderr: (15.2351411s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 image ls: (8.1186987s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (23.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (18.53s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-242800 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-242800 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-w8tb6" [3d604e5e-ef71-4c79-8775-be980649d5c2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-w8tb6" [3d604e5e-ef71-4c79-8775-be980649d5c2] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 18.0153223s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (18.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (21.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 image load --daemon gcr.io/google-containers/addon-resizer:functional-242800 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 image load --daemon gcr.io/google-containers/addon-resizer:functional-242800 --alsologtostderr: (13.0358908s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 image ls: (8.581381s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (21.62s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (14.43s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 service list
functional_test.go:1458: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 service list: (14.4311906s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (14.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (14.96s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 service list -o json: (14.9564454s)
functional_test.go:1493: Took "14.9564454s" to run "out/minikube-windows-amd64.exe -p functional-242800 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (14.96s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (29.71s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.6912951s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-242800
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 image load --daemon gcr.io/google-containers/addon-resizer:functional-242800 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 image load --daemon gcr.io/google-containers/addon-resizer:functional-242800 --alsologtostderr: (16.0251795s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 image ls: (9.742695s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (29.71s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.7s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-242800 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-242800 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-242800 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-242800 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 14100: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 13252: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.70s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-242800 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.28s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-242800 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Done: kubectl --context functional-242800 apply -f testdata\testsvc.yaml: (1.2261191s)
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [0630dce8-8df1-415d-9873-1eac6d3a6532] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [0630dce8-8df1-415d-9873-1eac6d3a6532] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.0229513s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.85s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 image save gcr.io/google-containers/addon-resizer:functional-242800 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 image save gcr.io/google-containers/addon-resizer:functional-242800 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (9.8510651s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.85s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (16.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 image rm gcr.io/google-containers/addon-resizer:functional-242800 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 image rm gcr.io/google-containers/addon-resizer:functional-242800 --alsologtostderr: (8.345258s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 image ls: (7.9028664s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (16.25s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-242800 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 13504: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (18.83s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (10.8504737s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 image ls: (7.9829838s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (18.83s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (9.84s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1274: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (9.3223819s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (9.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (11.1s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-242800
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 image save --daemon gcr.io/google-containers/addon-resizer:functional-242800 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 image save --daemon gcr.io/google-containers/addon-resizer:functional-242800 --alsologtostderr: (10.6496322s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-242800
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (11.10s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (9.51s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1309: (dbg) Done: out/minikube-windows-amd64.exe profile list: (9.24197s)
functional_test.go:1314: Took "9.2421802s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1328: Took "266.8575ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (9.51s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (9.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1360: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (9.0686028s)
functional_test.go:1365: Took "9.0686901s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1378: Took "331.5818ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (9.40s)

                                                
                                    
TestFunctional/parallel/DockerEnv/powershell (46.92s)
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-242800 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-242800"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-242800 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-242800": (32.1816964s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-242800 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-242800 docker-env | Invoke-Expression ; docker images": (14.7208388s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (46.92s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (2.62s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 update-context --alsologtostderr -v=2
functional_test.go:2118: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 update-context --alsologtostderr -v=2: (2.6175696s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.62s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.58s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 update-context --alsologtostderr -v=2
functional_test.go:2118: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 update-context --alsologtostderr -v=2: (2.5732311s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.58s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (2.66s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-242800 update-context --alsologtostderr -v=2
functional_test.go:2118: (dbg) Done: out/minikube-windows-amd64.exe -p functional-242800 update-context --alsologtostderr -v=2: (2.6560975s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.66s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.48s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-242800
--- PASS: TestFunctional/delete_addon-resizer_images (0.48s)

                                                
                                    
TestFunctional/delete_my-image_image (0.22s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-242800
--- PASS: TestFunctional/delete_my-image_image (0.22s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.2s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-242800
--- PASS: TestFunctional/delete_minikube_cached_images (0.20s)

                                                
                                    
TestImageBuild/serial/Setup (187.71s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-192800 --driver=hyperv
E0108 20:47:52.225382    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 20:47:58.282372    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
E0108 20:47:58.297649    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
E0108 20:47:58.313480    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
E0108 20:47:58.344349    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
E0108 20:47:58.391384    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
E0108 20:47:58.472080    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
E0108 20:47:58.646876    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
E0108 20:47:58.980973    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
E0108 20:47:59.633844    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
E0108 20:48:00.915585    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
E0108 20:48:03.476272    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
E0108 20:48:08.596810    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
E0108 20:48:18.847966    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
E0108 20:48:39.336241    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
E0108 20:49:20.308962    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-192800 --driver=hyperv: (3m7.7129648s)
--- PASS: TestImageBuild/serial/Setup (187.71s)

                                                
                                    
TestImageBuild/serial/NormalBuild (8.99s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-192800
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-192800: (8.9875389s)
--- PASS: TestImageBuild/serial/NormalBuild (8.99s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (8.69s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-192800
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-192800: (8.6902186s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (8.69s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (7.57s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-192800
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-192800: (7.5721126s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.57s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.47s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-192800
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-192800: (7.4704361s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.47s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (239.68s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-054400 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv
E0108 20:52:52.225367    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 20:52:58.291151    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
E0108 20:53:26.088149    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-054400 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv: (3m59.68009s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (239.68s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (39.18s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-054400 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-054400 addons enable ingress --alsologtostderr -v=5: (39.1838246s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (39.18s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (14.46s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-054400 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-054400 addons enable ingress-dns --alsologtostderr -v=5: (14.4596793s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (14.46s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (80.13s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-054400 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-054400 replace --force -f testdata\nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-054400 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [68db8d8c-1fbf-40ea-8cbf-3d260200ccd8] Pending
helpers_test.go:344: "nginx" [68db8d8c-1fbf-40ea-8cbf-3d260200ccd8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0108 20:55:55.416055    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
helpers_test.go:344: "nginx" [68db8d8c-1fbf-40ea-8cbf-3d260200ccd8] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 28.0203039s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-054400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-054400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.4148068s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p ingress-addon-legacy-054400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0108 20:56:17.798542   10708 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-054400 replace --force -f testdata\ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-054400 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-054400 ip: (2.535773s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.29.105.214
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-054400 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-054400 addons disable ingress-dns --alsologtostderr -v=1: (16.4249869s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-054400 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-054400 addons disable ingress --alsologtostderr -v=1: (21.3227062s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (80.13s)

                                                
                                    
TestJSONOutput/start/Command (231.25s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-708000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0108 21:00:48.000264    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
E0108 21:00:48.015555    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
E0108 21:00:48.031152    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
E0108 21:00:48.062819    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
E0108 21:00:48.110619    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
E0108 21:00:48.204933    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
E0108 21:00:48.379184    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
E0108 21:00:48.713554    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
E0108 21:00:49.360499    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
E0108 21:00:50.649099    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
E0108 21:00:53.220162    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
E0108 21:00:58.350322    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
E0108 21:01:08.604466    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
E0108 21:01:29.090955    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-708000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m51.2441428s)
--- PASS: TestJSONOutput/start/Command (231.25s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (7.94s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-708000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-708000 --output=json --user=testUser: (7.9404046s)
--- PASS: TestJSONOutput/pause/Command (7.94s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (7.69s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-708000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-708000 --output=json --user=testUser: (7.6915739s)
--- PASS: TestJSONOutput/unpause/Command (7.69s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (33.24s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-708000 --output=json --user=testUser
E0108 21:02:10.059064    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-708000 --output=json --user=testUser: (33.2440792s)
--- PASS: TestJSONOutput/stop/Command (33.24s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (1.51s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-765100 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-765100 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (263.8368ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c3cfb7c7-77d8-4fe5-a555-76f0fa666e9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-765100] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c212823e-7895-4072-82fe-232293590ab5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube7\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"9617456d-3c68-47ec-8670-28eb59c64f47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"58762cd3-485b-4a79-b355-c5a61e54fd1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"e2766d16-3859-41ec-81ed-c4d0861d950f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17907"}}
	{"specversion":"1.0","id":"6f4faf5c-d269-4d5f-8083-573aff93b4b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ffe58db5-8ead-4598-a391-05b127358386","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 21:02:52.966586    4500 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-765100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-765100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-765100: (1.2483921s)
--- PASS: TestErrorJSONOutput (1.51s)
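The --output=json events captured in the stdout block above are CloudEvents-style JSON lines (specversion, id, source, type, datacontenttype, data), which is the shape the TestJSONOutput and TestErrorJSONOutput assertions operate on. As a rough illustration only, the Go sketch below decodes one of the logged error events with a hypothetical struct that mirrors the fields visible in this log; it is not minikube's own type definition.

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the envelope fields visible in the logged output lines.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Error event copied from the -- stdout -- block above.
	line := `{"specversion":"1.0","id":"ffe58db5-8ead-4598-a391-05b127358386","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		fmt.Println("decode error:", err)
		return
	}
	fmt.Println(e.Type, e.Data["name"], e.Data["message"])
}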

                                                
                                    
TestMainNoArgs (0.26s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (147.18s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-474500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0108 21:12:35.434776    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 21:12:52.236107    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 21:12:58.291580    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-474500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m26.1766308s)
--- PASS: TestMountStart/serial/StartWithMountFirst (147.18s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (9.53s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-474500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-474500 ssh -- ls /minikube-host: (9.5246998s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.53s)
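
The verification step above is simply "out/minikube-windows-amd64.exe -p mount-start-1-474500 ssh -- ls /minikube-host". A minimal Go sketch of the same check follows; it assumes a "minikube" binary on PATH (the test uses the windows-amd64 build) and reuses the profile name from the run above.

// Re-run the mount verification from this test, outside the test harness.
// Assumption: "minikube" is on PATH; the profile name comes from the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "mount-start-1-474500",
		"ssh", "--", "ls", "/minikube-host").CombinedOutput()
	if err != nil {
		fmt.Println("mount check failed:", err)
	}
	fmt.Print(string(out)) // listing of the mounted host directory
}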

                                                
                                    
TestMountStart/serial/StartWithMountSecond (146.38s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-474500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0108 21:15:48.013451    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-474500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m25.3651776s)
--- PASS: TestMountStart/serial/StartWithMountSecond (146.38s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (9.46s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-474500 ssh -- ls /minikube-host
E0108 21:17:11.204183    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-474500 ssh -- ls /minikube-host: (9.460525s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.46s)

                                                
                                    
TestMountStart/serial/DeleteFirst (25.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-474500 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-474500 --alsologtostderr -v=5: (25.6956649s)
--- PASS: TestMountStart/serial/DeleteFirst (25.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (9.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-474500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-474500 ssh -- ls /minikube-host: (9.3752008s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.38s)

                                                
                                    
TestMountStart/serial/Stop (21.77s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-474500
E0108 21:17:52.231842    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 21:17:58.291546    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-474500: (21.7746867s)
--- PASS: TestMountStart/serial/Stop (21.77s)

                                                
                                    
TestMountStart/serial/RestartStopped (110.06s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-474500
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-474500: (1m49.0498712s)
--- PASS: TestMountStart/serial/RestartStopped (110.06s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (9.65s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-474500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-474500 ssh -- ls /minikube-host: (9.6451547s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.65s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (409.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-554300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0108 21:20:48.015586    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
E0108 21:21:01.472026    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
E0108 21:22:52.239682    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 21:22:58.287466    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
E0108 21:25:48.015019    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
multinode_test.go:86: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-554300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m25.764006s)
multinode_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 status --alsologtostderr
multinode_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 status --alsologtostderr: (23.9826727s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (409.75s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (9.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-554300 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-554300 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-554300 -- rollout status deployment/busybox: (3.2170787s)
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-554300 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-554300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-554300 -- exec busybox-5bc68d56bd-hrhnw -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-554300 -- exec busybox-5bc68d56bd-hrhnw -- nslookup kubernetes.io: (1.7583426s)
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-554300 -- exec busybox-5bc68d56bd-w2zbn -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-554300 -- exec busybox-5bc68d56bd-hrhnw -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-554300 -- exec busybox-5bc68d56bd-w2zbn -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-554300 -- exec busybox-5bc68d56bd-hrhnw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-554300 -- exec busybox-5bc68d56bd-w2zbn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.33s)

                                                
                                    
TestMultiNode/serial/AddNode (218.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-554300 -v 3 --alsologtostderr
E0108 21:29:15.454489    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 21:30:48.003408    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
multinode_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-554300 -v 3 --alsologtostderr: (3m2.8879382s)
multinode_test.go:117: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 status --alsologtostderr
multinode_test.go:117: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 status --alsologtostderr: (35.4365284s)
--- PASS: TestMultiNode/serial/AddNode (218.33s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-554300 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.20s)

                                                
                                    
TestMultiNode/serial/ProfileList (7.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (7.5231039s)
--- PASS: TestMultiNode/serial/ProfileList (7.52s)

                                                
                                    
TestMultiNode/serial/CopyFile (358.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 status --output json --alsologtostderr
E0108 21:32:52.239392    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
multinode_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 status --output json --alsologtostderr: (35.3696213s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 cp testdata\cp-test.txt multinode-554300:/home/docker/cp-test.txt
E0108 21:32:58.296257    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 cp testdata\cp-test.txt multinode-554300:/home/docker/cp-test.txt: (9.4472882s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300 "sudo cat /home/docker/cp-test.txt": (9.3702427s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 cp multinode-554300:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile213548577\001\cp-test_multinode-554300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 cp multinode-554300:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile213548577\001\cp-test_multinode-554300.txt: (9.361899s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300 "sudo cat /home/docker/cp-test.txt": (9.4753443s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 cp multinode-554300:/home/docker/cp-test.txt multinode-554300-m02:/home/docker/cp-test_multinode-554300_multinode-554300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 cp multinode-554300:/home/docker/cp-test.txt multinode-554300-m02:/home/docker/cp-test_multinode-554300_multinode-554300-m02.txt: (16.361675s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300 "sudo cat /home/docker/cp-test.txt"
E0108 21:33:51.217916    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300 "sudo cat /home/docker/cp-test.txt": (9.3751181s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m02 "sudo cat /home/docker/cp-test_multinode-554300_multinode-554300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m02 "sudo cat /home/docker/cp-test_multinode-554300_multinode-554300-m02.txt": (9.2825239s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 cp multinode-554300:/home/docker/cp-test.txt multinode-554300-m03:/home/docker/cp-test_multinode-554300_multinode-554300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 cp multinode-554300:/home/docker/cp-test.txt multinode-554300-m03:/home/docker/cp-test_multinode-554300_multinode-554300-m03.txt: (16.2302153s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300 "sudo cat /home/docker/cp-test.txt": (9.3223698s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m03 "sudo cat /home/docker/cp-test_multinode-554300_multinode-554300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m03 "sudo cat /home/docker/cp-test_multinode-554300_multinode-554300-m03.txt": (9.3721036s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 cp testdata\cp-test.txt multinode-554300-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 cp testdata\cp-test.txt multinode-554300-m02:/home/docker/cp-test.txt: (9.3588961s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m02 "sudo cat /home/docker/cp-test.txt": (9.4126967s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 cp multinode-554300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile213548577\001\cp-test_multinode-554300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 cp multinode-554300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile213548577\001\cp-test_multinode-554300-m02.txt: (9.3709658s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m02 "sudo cat /home/docker/cp-test.txt": (9.3742215s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 cp multinode-554300-m02:/home/docker/cp-test.txt multinode-554300:/home/docker/cp-test_multinode-554300-m02_multinode-554300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 cp multinode-554300-m02:/home/docker/cp-test.txt multinode-554300:/home/docker/cp-test_multinode-554300-m02_multinode-554300.txt: (16.5114318s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m02 "sudo cat /home/docker/cp-test.txt": (9.3653655s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300 "sudo cat /home/docker/cp-test_multinode-554300-m02_multinode-554300.txt"
E0108 21:35:48.017575    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300 "sudo cat /home/docker/cp-test_multinode-554300-m02_multinode-554300.txt": (9.3415018s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 cp multinode-554300-m02:/home/docker/cp-test.txt multinode-554300-m03:/home/docker/cp-test_multinode-554300-m02_multinode-554300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 cp multinode-554300-m02:/home/docker/cp-test.txt multinode-554300-m03:/home/docker/cp-test_multinode-554300-m02_multinode-554300-m03.txt: (16.3331205s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m02 "sudo cat /home/docker/cp-test.txt": (9.2730571s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m03 "sudo cat /home/docker/cp-test_multinode-554300-m02_multinode-554300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m03 "sudo cat /home/docker/cp-test_multinode-554300-m02_multinode-554300-m03.txt": (9.3852523s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 cp testdata\cp-test.txt multinode-554300-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 cp testdata\cp-test.txt multinode-554300-m03:/home/docker/cp-test.txt: (9.4065068s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m03 "sudo cat /home/docker/cp-test.txt": (9.3556425s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 cp multinode-554300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile213548577\001\cp-test_multinode-554300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 cp multinode-554300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile213548577\001\cp-test_multinode-554300-m03.txt: (9.4185781s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m03 "sudo cat /home/docker/cp-test.txt": (9.3816797s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 cp multinode-554300-m03:/home/docker/cp-test.txt multinode-554300:/home/docker/cp-test_multinode-554300-m03_multinode-554300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 cp multinode-554300-m03:/home/docker/cp-test.txt multinode-554300:/home/docker/cp-test_multinode-554300-m03_multinode-554300.txt: (16.278094s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m03 "sudo cat /home/docker/cp-test.txt": (9.5060568s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300 "sudo cat /home/docker/cp-test_multinode-554300-m03_multinode-554300.txt"
E0108 21:37:41.489193    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300 "sudo cat /home/docker/cp-test_multinode-554300-m03_multinode-554300.txt": (9.3596741s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 cp multinode-554300-m03:/home/docker/cp-test.txt multinode-554300-m02:/home/docker/cp-test_multinode-554300-m03_multinode-554300-m02.txt
E0108 21:37:52.243998    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 cp multinode-554300-m03:/home/docker/cp-test.txt multinode-554300-m02:/home/docker/cp-test_multinode-554300-m03_multinode-554300-m02.txt: (16.2318761s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m03 "sudo cat /home/docker/cp-test.txt"
E0108 21:37:58.303204    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m03 "sudo cat /home/docker/cp-test.txt": (9.3909323s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m02 "sudo cat /home/docker/cp-test_multinode-554300-m03_multinode-554300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 ssh -n multinode-554300-m02 "sudo cat /home/docker/cp-test_multinode-554300-m03_multinode-554300-m02.txt": (9.3126329s)
--- PASS: TestMultiNode/serial/CopyFile (358.35s)

                                                
                                    
TestMultiNode/serial/StopNode (65.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 node stop m03: (14.1697898s)
multinode_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-554300 status: exit status 7 (25.8167772s)

                                                
                                                
-- stdout --
	multinode-554300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-554300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-554300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 21:38:30.953835    2188 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:251: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-554300 status --alsologtostderr: exit status 7 (25.7149762s)

                                                
                                                
-- stdout --
	multinode-554300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-554300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-554300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 21:38:56.778462    7428 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0108 21:38:56.854453    7428 out.go:296] Setting OutFile to fd 1256 ...
	I0108 21:38:56.855451    7428 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:38:56.855451    7428 out.go:309] Setting ErrFile to fd 1568...
	I0108 21:38:56.855451    7428 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:38:56.871820    7428 out.go:303] Setting JSON to false
	I0108 21:38:56.871820    7428 mustload.go:65] Loading cluster: multinode-554300
	I0108 21:38:56.872400    7428 notify.go:220] Checking for updates...
	I0108 21:38:56.873187    7428 config.go:182] Loaded profile config "multinode-554300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:38:56.873222    7428 status.go:255] checking status of multinode-554300 ...
	I0108 21:38:56.873988    7428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:38:59.052367    7428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:38:59.052367    7428 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:38:59.052367    7428 status.go:330] multinode-554300 host status = "Running" (err=<nil>)
	I0108 21:38:59.052367    7428 host.go:66] Checking if "multinode-554300" exists ...
	I0108 21:38:59.053125    7428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:39:01.188254    7428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:39:01.188254    7428 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:39:01.188339    7428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:39:03.786459    7428 main.go:141] libmachine: [stdout =====>] : 172.29.107.59
	
	I0108 21:39:03.786459    7428 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:39:03.786538    7428 host.go:66] Checking if "multinode-554300" exists ...
	I0108 21:39:03.800176    7428 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:39:03.801206    7428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300 ).state
	I0108 21:39:05.946581    7428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:39:05.946581    7428 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:39:05.946798    7428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]
	I0108 21:39:08.440632    7428 main.go:141] libmachine: [stdout =====>] : 172.29.107.59
	
	I0108 21:39:08.440663    7428 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:39:08.441417    7428 sshutil.go:53] new ssh client: &{IP:172.29.107.59 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300\id_rsa Username:docker}
	I0108 21:39:08.548589    7428 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7473594s)
	I0108 21:39:08.563774    7428 ssh_runner.go:195] Run: systemctl --version
	I0108 21:39:08.586694    7428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:39:08.611458    7428 kubeconfig.go:92] found "multinode-554300" server: "https://172.29.107.59:8443"
	I0108 21:39:08.611458    7428 api_server.go:166] Checking apiserver status ...
	I0108 21:39:08.628076    7428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:39:08.669565    7428 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2078/cgroup
	I0108 21:39:08.685752    7428 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/pod2efb47d905867f62472179a55c21eb33/eb93c2ad9198efe4f00dde51e8d9be4d532ac18013b6ed0d120d8f84b6abf8f5"
	I0108 21:39:08.699106    7428 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2efb47d905867f62472179a55c21eb33/eb93c2ad9198efe4f00dde51e8d9be4d532ac18013b6ed0d120d8f84b6abf8f5/freezer.state
	I0108 21:39:08.712667    7428 api_server.go:204] freezer state: "THAWED"
	I0108 21:39:08.712667    7428 api_server.go:253] Checking apiserver healthz at https://172.29.107.59:8443/healthz ...
	I0108 21:39:08.720194    7428 api_server.go:279] https://172.29.107.59:8443/healthz returned 200:
	ok
	I0108 21:39:08.720194    7428 status.go:421] multinode-554300 apiserver status = Running (err=<nil>)
	I0108 21:39:08.720194    7428 status.go:257] multinode-554300 status: &{Name:multinode-554300 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 21:39:08.720194    7428 status.go:255] checking status of multinode-554300-m02 ...
	I0108 21:39:08.720876    7428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:39:10.880685    7428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:39:10.880870    7428 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:39:10.880870    7428 status.go:330] multinode-554300-m02 host status = "Running" (err=<nil>)
	I0108 21:39:10.880870    7428 host.go:66] Checking if "multinode-554300-m02" exists ...
	I0108 21:39:10.881621    7428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:39:13.014541    7428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:39:13.014541    7428 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:39:13.014671    7428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:39:15.510664    7428 main.go:141] libmachine: [stdout =====>] : 172.29.96.43
	
	I0108 21:39:15.510888    7428 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:39:15.510888    7428 host.go:66] Checking if "multinode-554300-m02" exists ...
	I0108 21:39:15.526294    7428 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:39:15.526294    7428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m02 ).state
	I0108 21:39:17.611816    7428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 21:39:17.611816    7428 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:39:17.611963    7428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-554300-m02 ).networkadapters[0]).ipaddresses[0]
	I0108 21:39:20.104820    7428 main.go:141] libmachine: [stdout =====>] : 172.29.96.43
	
	I0108 21:39:20.104820    7428 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:39:20.105416    7428 sshutil.go:53] new ssh client: &{IP:172.29.96.43 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-554300-m02\id_rsa Username:docker}
	I0108 21:39:20.209918    7428 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.6836007s)
	I0108 21:39:20.223706    7428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:39:20.244782    7428 status.go:257] multinode-554300-m02 status: &{Name:multinode-554300-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0108 21:39:20.244950    7428 status.go:255] checking status of multinode-554300-m03 ...
	I0108 21:39:20.245753    7428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-554300-m03 ).state
	I0108 21:39:22.320812    7428 main.go:141] libmachine: [stdout =====>] : Off
	
	I0108 21:39:22.320812    7428 main.go:141] libmachine: [stderr =====>] : 
	I0108 21:39:22.320909    7428 status.go:330] multinode-554300-m03 host status = "Stopped" (err=<nil>)
	I0108 21:39:22.320909    7428 status.go:343] host is not running, skipping remaining checks
	I0108 21:39:22.320909    7428 status.go:257] multinode-554300-m03 status: &{Name:multinode-554300-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (65.70s)
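
The verbose status output above shows how the hyperv driver resolves node state: libmachine shells out to powershell.exe with expressions such as "( Hyper-V\Get-VM multinode-554300 ).state" and "(( Hyper-V\Get-VM multinode-554300 ).networkadapters[0]).ipaddresses[0]". The Go sketch below reproduces only the state query, copying the PowerShell command from the log; the wrapper function is hypothetical and not minikube code.

// Query a Hyper-V VM's state the same way the log above shows the driver doing.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func vmState(name string) (string, error) {
	// Command copied from the "[executing ==>]" lines in the stderr block above.
	cmd := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive",
		fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, name),
	)
	out, err := cmd.Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil // e.g. "Running" or "Off"
}

func main() {
	state, err := vmState("multinode-554300-m03")
	fmt.Println(state, err) // the run above reports "Off" for the stopped node
}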

                                                
                                    
TestMultiNode/serial/StartAfterStop (164.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 node start m03 --alsologtostderr
E0108 21:40:48.014626    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 node start m03 --alsologtostderr: (2m9.2290706s)
multinode_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-554300 status
multinode_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-554300 status: (35.2653554s)
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (164.68s)

                                                
                                    
TestPreload (491.21s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-973400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0108 21:54:21.496718    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
E0108 21:55:48.024700    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-973400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m14.0216805s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-973400 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-973400 image pull gcr.io/k8s-minikube/busybox: (8.3253507s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-973400
E0108 21:57:52.247400    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 21:57:58.311955    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-973400: (34.3782073s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-973400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0108 22:00:48.021472    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-973400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m31.6923246s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-973400 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-973400 image list: (7.2737753s)
helpers_test.go:175: Cleaning up "test-preload-973400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-973400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-973400: (35.5203524s)
--- PASS: TestPreload (491.21s)

                                                
                                    
TestScheduledStopWindows (321.93s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-483200 --memory=2048 --driver=hyperv
E0108 22:02:35.468638    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 22:02:52.243935    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 22:02:58.301461    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-483200 --memory=2048 --driver=hyperv: (3m9.3489962s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-483200 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-483200 --schedule 5m: (10.6230699s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-483200 -n scheduled-stop-483200
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-483200 -n scheduled-stop-483200: exit status 1 (10.0444892s)

                                                
                                                
** stderr ** 
	W0108 22:04:55.469577    7360 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-483200 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-483200 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.5152185s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-483200 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-483200 --schedule 5s: (10.5171771s)
E0108 22:05:48.015049    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-054400\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-483200
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-483200: exit status 7 (2.3833239s)

                                                
                                                
-- stdout --
	scheduled-stop-483200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 22:06:25.559545    3952 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-483200 -n scheduled-stop-483200
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-483200 -n scheduled-stop-483200: exit status 7 (2.4402759s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 22:06:27.940497    3048 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-483200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-483200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-483200: (27.0464766s)
--- PASS: TestScheduledStopWindows (321.93s)

                                                
                                    
TestKubernetesUpgrade (922.24s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-158500 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-158500 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv: (8m26.163089s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-158500
version_upgrade_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-158500: (28.6601136s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-158500 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-158500 status --format={{.Host}}: exit status 7 (2.6112262s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 22:30:15.478892   11860 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-158500 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-158500 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (3m7.2840606s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-158500 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-158500 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperv
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-158500 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperv: exit status 106 (328.2834ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-158500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 22:33:25.620981   13592 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-158500
	    minikube start -p kubernetes-upgrade-158500 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1585002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-158500 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-158500 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:288: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-158500 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (2m37.3214124s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-158500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-158500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-158500: (39.6515636s)
--- PASS: TestKubernetesUpgrade (922.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-152000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-152000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (324.9241ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-152000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 22:06:57.447800    6460 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.33s)

                                                
                                    
TestPause/serial/Start (520.78s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-810600 --memory=2048 --install-addons=false --wait=all --driver=hyperv
E0108 22:19:15.481094    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-810600 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (8m40.775523s)
--- PASS: TestPause/serial/Start (520.78s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.65s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.65s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (251.62s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-810600 --alsologtostderr -v=1 --driver=hyperv
E0108 22:27:41.530490    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
E0108 22:27:52.251246    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 22:27:58.310792    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-810600 --alsologtostderr -v=1 --driver=hyperv: (4m11.600382s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (251.62s)

                                                
                                    
TestPause/serial/Pause (8.01s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-810600 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-810600 --alsologtostderr -v=5: (8.0071642s)
--- PASS: TestPause/serial/Pause (8.01s)

                                                
                                    
TestPause/serial/VerifyStatus (12.34s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-810600 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-810600 --output=json --layout=cluster: exit status 2 (12.3399804s)

                                                
                                                
-- stdout --
	{"Name":"pause-810600","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-810600","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 22:31:46.235417   11816 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestPause/serial/VerifyStatus (12.34s)

                                                
                                    
TestPause/serial/Unpause (7.98s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-810600 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-810600 --alsologtostderr -v=5: (7.9833545s)
--- PASS: TestPause/serial/Unpause (7.98s)

                                                
                                    
TestPause/serial/PauseAgain (8.07s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-810600 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-810600 --alsologtostderr -v=5: (8.0657593s)
--- PASS: TestPause/serial/PauseAgain (8.07s)

                                                
                                    
TestPause/serial/DeletePaused (50.24s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-810600 --alsologtostderr -v=5
E0108 22:32:52.261029    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-084500\client.crt: The system cannot find the path specified.
E0108 22:32:58.312995    3008 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-242800\client.crt: The system cannot find the path specified.
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-810600 --alsologtostderr -v=5: (50.2405654s)
--- PASS: TestPause/serial/DeletePaused (50.24s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (24.36s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (24.3590309s)
--- PASS: TestPause/serial/VerifyDeletedResources (24.36s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (10.39s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-266300
version_upgrade_test.go:219: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-266300: (10.3946135s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (10.39s)

                                                
                                    

Test skip (32/208)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (300.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-242800 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-242800 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 8460: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.02s)

                                                
                                    
TestFunctional/parallel/DryRun (5.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-242800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-242800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0495767s)

                                                
                                                
-- stdout --
	* [functional-242800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 20:40:19.403066    8288 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0108 20:40:19.489992    8288 out.go:296] Setting OutFile to fd 900 ...
	I0108 20:40:19.489992    8288 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:40:19.489992    8288 out.go:309] Setting ErrFile to fd 1220...
	I0108 20:40:19.489992    8288 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:40:19.511943    8288 out.go:303] Setting JSON to false
	I0108 20:40:19.518139    8288 start.go:128] hostinfo: {"hostname":"minikube7","uptime":24361,"bootTime":1704722057,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0108 20:40:19.518275    8288 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 20:40:19.519657    8288 out.go:177] * [functional-242800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I0108 20:40:19.520382    8288 notify.go:220] Checking for updates...
	I0108 20:40:19.521072    8288 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 20:40:19.521782    8288 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:40:19.521782    8288 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0108 20:40:19.523209    8288 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:40:19.524053    8288 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:40:19.524053    8288 config.go:182] Loaded profile config "functional-242800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 20:40:19.527130    8288 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.05s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (5.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-242800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-242800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0571705s)

                                                
                                                
-- stdout --
	* [functional-242800] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 20:40:24.447769    8908 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0108 20:40:24.532531    8908 out.go:296] Setting OutFile to fd 1072 ...
	I0108 20:40:24.533519    8908 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:40:24.533519    8908 out.go:309] Setting ErrFile to fd 1348...
	I0108 20:40:24.533519    8908 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:40:24.559841    8908 out.go:303] Setting JSON to false
	I0108 20:40:24.564823    8908 start.go:128] hostinfo: {"hostname":"minikube7","uptime":24366,"bootTime":1704722057,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0108 20:40:24.565838    8908 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 20:40:24.566825    8908 out.go:177] * [functional-242800] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I0108 20:40:24.567844    8908 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0108 20:40:24.566825    8908 notify.go:220] Checking for updates...
	I0108 20:40:24.567844    8908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:40:24.568833    8908 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0108 20:40:24.569832    8908 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:40:24.570829    8908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:40:24.571839    8908 config.go:182] Loaded profile config "functional-242800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 20:40:24.573848    8908 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.06s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    