Test Report: Hyper-V_Windows 17830

f2d99d5d3acbee63fb92e6e0c0b75bbff35f3ad4:2024-01-09:32615

Test fail (12/208)

TestAddons/parallel/Registry (73.05s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 23.0085ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-hxcrm" [64becf8e-0c80-4b86-ad31-1fac01c460c7] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0144546s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7khrh" [11b6e0ae-7db1-42a9-849b-57bb9ee9a175] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.020928s
addons_test.go:340: (dbg) Run:  kubectl --context addons-852800 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-852800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-852800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.004747s)
addons_test.go:359: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 ip
addons_test.go:359: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 ip: (2.7709706s)
addons_test.go:364: expected stderr to be -empty- but got: *"W0108 23:00:45.721905    5928 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-852800 ip"
2024/01/08 23:00:48 [DEBUG] GET http://172.24.111.87:5000
addons_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 addons disable registry --alsologtostderr -v=1: (15.9106698s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-852800 -n addons-852800
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-852800 -n addons-852800: (12.9740977s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 logs -n 25: (10.3828172s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-486300 | minikube1\jenkins | v1.32.0 | 08 Jan 24 22:53 UTC |                     |
	|         | -p download-only-486300              |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr            |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0         |                      |                   |         |                     |                     |
	|         | --container-runtime=docker           |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| start   | -o=json --download-only              | download-only-486300 | minikube1\jenkins | v1.32.0 | 08 Jan 24 22:53 UTC |                     |
	|         | -p download-only-486300              |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr            |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4         |                      |                   |         |                     |                     |
	|         | --container-runtime=docker           |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| start   | -o=json --download-only              | download-only-486300 | minikube1\jenkins | v1.32.0 | 08 Jan 24 22:53 UTC |                     |
	|         | -p download-only-486300              |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr            |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                      |                   |         |                     |                     |
	|         | --container-runtime=docker           |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | --all                                | minikube             | minikube1\jenkins | v1.32.0 | 08 Jan 24 22:53 UTC | 08 Jan 24 22:53 UTC |
	| delete  | -p download-only-486300              | download-only-486300 | minikube1\jenkins | v1.32.0 | 08 Jan 24 22:53 UTC | 08 Jan 24 22:53 UTC |
	| delete  | -p download-only-486300              | download-only-486300 | minikube1\jenkins | v1.32.0 | 08 Jan 24 22:53 UTC | 08 Jan 24 22:53 UTC |
	| start   | --download-only -p                   | binary-mirror-325800 | minikube1\jenkins | v1.32.0 | 08 Jan 24 22:53 UTC |                     |
	|         | binary-mirror-325800                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr                    |                      |                   |         |                     |                     |
	|         | --binary-mirror                      |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:61448               |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-325800              | binary-mirror-325800 | minikube1\jenkins | v1.32.0 | 08 Jan 24 22:53 UTC | 08 Jan 24 22:53 UTC |
	| addons  | disable dashboard -p                 | addons-852800        | minikube1\jenkins | v1.32.0 | 08 Jan 24 22:53 UTC |                     |
	|         | addons-852800                        |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-852800        | minikube1\jenkins | v1.32.0 | 08 Jan 24 22:53 UTC |                     |
	|         | addons-852800                        |                      |                   |         |                     |                     |
	| start   | -p addons-852800 --wait=true         | addons-852800        | minikube1\jenkins | v1.32.0 | 08 Jan 24 22:53 UTC | 08 Jan 24 23:00 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --addons=registry                    |                      |                   |         |                     |                     |
	|         | --addons=metrics-server              |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |                   |         |                     |                     |
	|         | --addons=yakd --driver=hyperv        |                      |                   |         |                     |                     |
	|         | --addons=ingress                     |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |                   |         |                     |                     |
	| addons  | addons-852800 addons                 | addons-852800        | minikube1\jenkins | v1.32.0 | 08 Jan 24 23:00 UTC | 08 Jan 24 23:00 UTC |
	|         | disable metrics-server               |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-852800        | minikube1\jenkins | v1.32.0 | 08 Jan 24 23:00 UTC | 08 Jan 24 23:00 UTC |
	|         | addons-852800                        |                      |                   |         |                     |                     |
	| ip      | addons-852800 ip                     | addons-852800        | minikube1\jenkins | v1.32.0 | 08 Jan 24 23:00 UTC | 08 Jan 24 23:00 UTC |
	| addons  | addons-852800 addons disable         | addons-852800        | minikube1\jenkins | v1.32.0 | 08 Jan 24 23:00 UTC | 08 Jan 24 23:01 UTC |
	|         | registry --alsologtostderr           |                      |                   |         |                     |                     |
	|         | -v=1                                 |                      |                   |         |                     |                     |
	| addons  | addons-852800 addons disable         | addons-852800        | minikube1\jenkins | v1.32.0 | 08 Jan 24 23:01 UTC |                     |
	|         | helm-tiller --alsologtostderr        |                      |                   |         |                     |                     |
	|         | -v=1                                 |                      |                   |         |                     |                     |
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:53:56
	Running on machine: minikube1
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:53:56.083026    5968 out.go:296] Setting OutFile to fd 748 ...
	I0108 22:53:56.083715    5968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:53:56.083715    5968 out.go:309] Setting ErrFile to fd 916...
	I0108 22:53:56.083715    5968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:53:56.117821    5968 out.go:303] Setting JSON to false
	I0108 22:53:56.122820    5968 start.go:128] hostinfo: {"hostname":"minikube1","uptime":1930,"bootTime":1704752505,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0108 22:53:56.122820    5968 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 22:53:56.127247    5968 out.go:177] * [addons-852800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I0108 22:53:56.131880    5968 notify.go:220] Checking for updates...
	I0108 22:53:56.135011    5968 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0108 22:53:56.138010    5968 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:53:56.140381    5968 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0108 22:53:56.143190    5968 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 22:53:56.146219    5968 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:53:56.150179    5968 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:54:01.806334    5968 out.go:177] * Using the hyperv driver based on user configuration
	I0108 22:54:01.810419    5968 start.go:298] selected driver: hyperv
	I0108 22:54:01.810616    5968 start.go:902] validating driver "hyperv" against <nil>
	I0108 22:54:01.810616    5968 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:54:01.858133    5968 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 22:54:01.859809    5968 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 22:54:01.859888    5968 cni.go:84] Creating CNI manager for ""
	I0108 22:54:01.859888    5968 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 22:54:01.859888    5968 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 22:54:01.859888    5968 start_flags.go:323] config:
	{Name:addons-852800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-852800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:54:01.860473    5968 iso.go:125] acquiring lock: {Name:mk6e44fd4f974e035b521383471f58bfbae3f4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:54:01.865406    5968 out.go:177] * Starting control plane node addons-852800 in cluster addons-852800
	I0108 22:54:01.868310    5968 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 22:54:01.868614    5968 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0108 22:54:01.868648    5968 cache.go:56] Caching tarball of preloaded images
	I0108 22:54:01.869040    5968 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 22:54:01.869310    5968 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0108 22:54:01.869995    5968 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\config.json ...
	I0108 22:54:01.870311    5968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\config.json: {Name:mk474769bc538744fbbf96a765d8022bf2d12b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:54:01.871700    5968 start.go:365] acquiring machines lock for addons-852800: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 22:54:01.871907    5968 start.go:369] acquired machines lock for "addons-852800" in 147.2µs
	I0108 22:54:01.872147    5968 start.go:93] Provisioning new machine with config: &{Name:addons-852800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-852800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 22:54:01.872147    5968 start.go:125] createHost starting for "" (driver="hyperv")
	I0108 22:54:01.878927    5968 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0108 22:54:01.880004    5968 start.go:159] libmachine.API.Create for "addons-852800" (driver="hyperv")
	I0108 22:54:01.880112    5968 client.go:168] LocalClient.Create starting
	I0108 22:54:01.880322    5968 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0108 22:54:02.179826    5968 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0108 22:54:02.382331    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0108 22:54:04.613751    5968 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0108 22:54:04.613751    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:54:04.613847    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0108 22:54:06.396473    5968 main.go:141] libmachine: [stdout =====>] : False
	
	I0108 22:54:06.396673    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:54:06.396673    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0108 22:54:07.919372    5968 main.go:141] libmachine: [stdout =====>] : True
	
	I0108 22:54:07.919475    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:54:07.919550    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0108 22:54:11.921421    5968 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0108 22:54:11.921628    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:54:11.924975    5968 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0108 22:54:12.407999    5968 main.go:141] libmachine: Creating SSH key...
	I0108 22:54:12.606978    5968 main.go:141] libmachine: Creating VM...
	I0108 22:54:12.606978    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0108 22:54:15.459047    5968 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0108 22:54:15.459047    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:54:15.459047    5968 main.go:141] libmachine: Using switch "Default Switch"
	I0108 22:54:15.459047    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0108 22:54:17.285321    5968 main.go:141] libmachine: [stdout =====>] : True
	
	I0108 22:54:17.285321    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:54:17.285545    5968 main.go:141] libmachine: Creating VHD
	I0108 22:54:17.285545    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\fixed.vhd' -SizeBytes 10MB -Fixed
	I0108 22:54:21.094901    5968 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : DEAE7F21-00DB-4B22-9D98-A2FB9673778D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0108 22:54:21.098008    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:54:21.098008    5968 main.go:141] libmachine: Writing magic tar header
	I0108 22:54:21.098134    5968 main.go:141] libmachine: Writing SSH key tar header
	I0108 22:54:21.106970    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\disk.vhd' -VHDType Dynamic -DeleteSource
	I0108 22:54:24.342972    5968 main.go:141] libmachine: [stdout =====>] : 
	I0108 22:54:24.342972    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:54:24.343082    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\disk.vhd' -SizeBytes 20000MB
	I0108 22:54:26.906510    5968 main.go:141] libmachine: [stdout =====>] : 
	I0108 22:54:26.906510    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:54:26.906510    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-852800 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0108 22:54:31.294531    5968 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-852800 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0108 22:54:31.294627    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:54:31.294627    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-852800 -DynamicMemoryEnabled $false
	I0108 22:54:33.586144    5968 main.go:141] libmachine: [stdout =====>] : 
	I0108 22:54:33.586253    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:54:33.586253    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-852800 -Count 2
	I0108 22:54:35.788763    5968 main.go:141] libmachine: [stdout =====>] : 
	I0108 22:54:35.788971    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:54:35.788971    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-852800 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\boot2docker.iso'
	I0108 22:54:38.417738    5968 main.go:141] libmachine: [stdout =====>] : 
	I0108 22:54:38.417738    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:54:38.417738    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-852800 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\disk.vhd'
	I0108 22:54:41.168382    5968 main.go:141] libmachine: [stdout =====>] : 
	I0108 22:54:41.168382    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:54:41.168496    5968 main.go:141] libmachine: Starting VM...
	I0108 22:54:41.168496    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-852800
	I0108 22:54:44.387410    5968 main.go:141] libmachine: [stdout =====>] : 
	I0108 22:54:44.387491    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:54:44.387491    5968 main.go:141] libmachine: Waiting for host to start...
	I0108 22:54:44.387491    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:54:46.751530    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:54:46.751530    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:54:46.751530    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:54:49.355422    5968 main.go:141] libmachine: [stdout =====>] : 
	I0108 22:54:49.355422    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:54:50.357588    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:54:52.632690    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:54:52.632690    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:54:52.632690    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:54:55.252068    5968 main.go:141] libmachine: [stdout =====>] : 
	I0108 22:54:55.252295    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:54:56.265034    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:54:58.456731    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:54:58.456879    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:54:58.456879    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:55:01.002092    5968 main.go:141] libmachine: [stdout =====>] : 
	I0108 22:55:01.002092    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:02.015723    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:55:04.240528    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:55:04.240528    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:04.240624    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:55:06.782783    5968 main.go:141] libmachine: [stdout =====>] : 
	I0108 22:55:06.782783    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:07.787195    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:55:09.973032    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:55:09.973032    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:09.973032    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:55:12.591186    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:55:12.591186    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:12.591466    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:55:14.756807    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:55:14.756807    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:14.757028    5968 machine.go:88] provisioning docker machine ...
	I0108 22:55:14.757113    5968 buildroot.go:166] provisioning hostname "addons-852800"
	I0108 22:55:14.757208    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:55:16.969495    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:55:16.969495    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:16.969688    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:55:19.542698    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:55:19.542698    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:19.549205    5968 main.go:141] libmachine: Using SSH client type: native
	I0108 22:55:19.558540    5968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac6120] 0xac8c60 <nil>  [] 0s} 172.24.111.87 22 <nil> <nil>}
	I0108 22:55:19.558540    5968 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-852800 && echo "addons-852800" | sudo tee /etc/hostname
	I0108 22:55:19.747277    5968 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-852800
	
	I0108 22:55:19.747354    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:55:21.909033    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:55:21.909217    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:21.909217    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:55:24.483832    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:55:24.484006    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:24.489323    5968 main.go:141] libmachine: Using SSH client type: native
	I0108 22:55:24.490035    5968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac6120] 0xac8c60 <nil>  [] 0s} 172.24.111.87 22 <nil> <nil>}
	I0108 22:55:24.490035    5968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-852800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-852800/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-852800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:55:24.645040    5968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:55:24.645040    5968 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0108 22:55:24.645040    5968 buildroot.go:174] setting up certificates
	I0108 22:55:24.645040    5968 provision.go:83] configureAuth start
	I0108 22:55:24.645040    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:55:26.828946    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:55:26.828946    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:26.828946    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:55:29.391129    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:55:29.391388    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:29.391388    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:55:31.536739    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:55:31.536739    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:31.536851    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:55:34.138653    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:55:34.138653    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:34.138898    5968 provision.go:138] copyHostCerts
	I0108 22:55:34.139490    5968 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0108 22:55:34.141469    5968 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0108 22:55:34.142826    5968 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0108 22:55:34.144446    5968 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-852800 san=[172.24.111.87 172.24.111.87 localhost 127.0.0.1 minikube addons-852800]
	I0108 22:55:34.395999    5968 provision.go:172] copyRemoteCerts
	I0108 22:55:34.410020    5968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:55:34.410020    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:55:36.546177    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:55:36.546177    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:36.546277    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:55:39.119770    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:55:39.119946    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:39.120245    5968 sshutil.go:53] new ssh client: &{IP:172.24.111.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0108 22:55:39.230371    5968 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8202744s)
	I0108 22:55:39.230675    5968 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:55:39.272563    5968 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0108 22:55:39.315774    5968 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 22:55:39.355014    5968 provision.go:86] duration metric: configureAuth took 14.7099727s
	I0108 22:55:39.355014    5968 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:55:39.356003    5968 config.go:182] Loaded profile config "addons-852800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 22:55:39.356003    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:55:41.524727    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:55:41.524727    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:41.524820    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:55:44.067779    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:55:44.067779    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:44.074154    5968 main.go:141] libmachine: Using SSH client type: native
	I0108 22:55:44.074837    5968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac6120] 0xac8c60 <nil>  [] 0s} 172.24.111.87 22 <nil> <nil>}
	I0108 22:55:44.074837    5968 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 22:55:44.230173    5968 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0108 22:55:44.230264    5968 buildroot.go:70] root file system type: tmpfs
	I0108 22:55:44.230646    5968 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 22:55:44.230777    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:55:46.385402    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:55:46.385402    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:46.385538    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:55:48.919165    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:55:48.919165    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:48.925488    5968 main.go:141] libmachine: Using SSH client type: native
	I0108 22:55:48.926286    5968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac6120] 0xac8c60 <nil>  [] 0s} 172.24.111.87 22 <nil> <nil>}
	I0108 22:55:48.926286    5968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 22:55:49.087425    5968 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 22:55:49.087632    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:55:51.189471    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:55:51.189471    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:51.189664    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:55:53.718121    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:55:53.718121    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:53.724135    5968 main.go:141] libmachine: Using SSH client type: native
	I0108 22:55:53.724290    5968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac6120] 0xac8c60 <nil>  [] 0s} 172.24.111.87 22 <nil> <nil>}
	I0108 22:55:53.724879    5968 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 22:55:54.877799    5968 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0108 22:55:54.877799    5968 machine.go:91] provisioned docker machine in 40.1207676s
	I0108 22:55:54.877799    5968 client.go:171] LocalClient.Create took 1m52.9976393s
	I0108 22:55:54.877799    5968 start.go:167] duration metric: libmachine.API.Create for "addons-852800" took 1m52.9977838s
	I0108 22:55:54.877799    5968 start.go:300] post-start starting for "addons-852800" (driver="hyperv")
	I0108 22:55:54.878327    5968 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:55:54.892670    5968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:55:54.892670    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:55:57.076789    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:55:57.076789    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:57.076789    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:55:59.718762    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:55:59.719140    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:55:59.719359    5968 sshutil.go:53] new ssh client: &{IP:172.24.111.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0108 22:55:59.830980    5968 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9382435s)
	I0108 22:55:59.845395    5968 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:55:59.850367    5968 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 22:55:59.850367    5968 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0108 22:55:59.851361    5968 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0108 22:55:59.851361    5968 start.go:303] post-start completed in 4.9735608s
	I0108 22:55:59.854421    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:56:02.027680    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:56:02.027680    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:56:02.027873    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:56:04.554290    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:56:04.554290    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:56:04.554290    5968 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\config.json ...
	I0108 22:56:04.557634    5968 start.go:128] duration metric: createHost completed in 2m2.6854743s
	I0108 22:56:04.557634    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:56:06.684767    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:56:06.684767    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:56:06.684767    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:56:09.186633    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:56:09.187031    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:56:09.194358    5968 main.go:141] libmachine: Using SSH client type: native
	I0108 22:56:09.195144    5968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac6120] 0xac8c60 <nil>  [] 0s} 172.24.111.87 22 <nil> <nil>}
	I0108 22:56:09.195144    5968 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 22:56:09.352316    5968 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704754569.351701358
	
	I0108 22:56:09.352316    5968 fix.go:206] guest clock: 1704754569.351701358
	I0108 22:56:09.352316    5968 fix.go:219] Guest: 2024-01-08 22:56:09.351701358 +0000 UTC Remote: 2024-01-08 22:56:04.5576342 +0000 UTC m=+128.674142301 (delta=4.794067158s)
	I0108 22:56:09.352574    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:56:11.528434    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:56:11.528646    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:56:11.528646    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:56:14.093112    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:56:14.093112    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:56:14.100948    5968 main.go:141] libmachine: Using SSH client type: native
	I0108 22:56:14.101783    5968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xac6120] 0xac8c60 <nil>  [] 0s} 172.24.111.87 22 <nil> <nil>}
	I0108 22:56:14.101783    5968 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1704754569
	I0108 22:56:14.265823    5968 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan  8 22:56:09 UTC 2024
	
	I0108 22:56:14.265823    5968 fix.go:226] clock set: Mon Jan  8 22:56:09 UTC 2024
	 (err=<nil>)
	I0108 22:56:14.265823    5968 start.go:83] releasing machines lock for "addons-852800", held for 2m12.3938178s
	I0108 22:56:14.266506    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:56:16.392025    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:56:16.392204    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:56:16.392204    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:56:18.965634    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:56:18.965857    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:56:18.970614    5968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:56:18.970614    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:56:18.984854    5968 ssh_runner.go:195] Run: cat /version.json
	I0108 22:56:18.984854    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:56:21.239977    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:56:21.240285    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:56:21.240285    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:56:21.240285    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:56:21.240285    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:56:21.240285    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:56:23.923609    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:56:23.923847    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:56:23.923920    5968 sshutil.go:53] new ssh client: &{IP:172.24.111.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0108 22:56:23.943665    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:56:23.943665    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:56:23.943665    5968 sshutil.go:53] new ssh client: &{IP:172.24.111.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0108 22:56:24.107399    5968 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1367846s)
	I0108 22:56:24.107399    5968 ssh_runner.go:235] Completed: cat /version.json: (5.1225444s)
	I0108 22:56:24.125829    5968 ssh_runner.go:195] Run: systemctl --version
	I0108 22:56:24.151140    5968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:56:24.159652    5968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:56:24.172939    5968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:56:24.201926    5968 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 22:56:24.202105    5968 start.go:475] detecting cgroup driver to use...
	I0108 22:56:24.202499    5968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:56:24.250585    5968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0108 22:56:24.279981    5968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 22:56:24.298475    5968 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 22:56:24.315928    5968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 22:56:24.346212    5968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 22:56:24.377718    5968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 22:56:24.414971    5968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 22:56:24.447678    5968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:56:24.479983    5968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 22:56:24.511925    5968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:56:24.541323    5968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:56:24.575709    5968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:56:24.748093    5968 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 22:56:24.784766    5968 start.go:475] detecting cgroup driver to use...
	I0108 22:56:24.800673    5968 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 22:56:24.843154    5968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:56:24.878599    5968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:56:24.918765    5968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:56:24.957105    5968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 22:56:24.990801    5968 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 22:56:25.049648    5968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 22:56:25.074162    5968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:56:25.116779    5968 ssh_runner.go:195] Run: which cri-dockerd
	I0108 22:56:25.136734    5968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 22:56:25.152336    5968 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0108 22:56:25.196039    5968 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 22:56:25.363428    5968 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 22:56:25.522958    5968 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0108 22:56:25.523238    5968 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0108 22:56:25.563121    5968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:56:25.730650    5968 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 22:56:27.303020    5968 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5723702s)
	I0108 22:56:27.318315    5968 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 22:56:27.485239    5968 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0108 22:56:27.657943    5968 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 22:56:27.829581    5968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:56:27.998704    5968 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0108 22:56:28.036325    5968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:56:28.209676    5968 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0108 22:56:28.316605    5968 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 22:56:28.331802    5968 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 22:56:28.339829    5968 start.go:543] Will wait 60s for crictl version
	I0108 22:56:28.353690    5968 ssh_runner.go:195] Run: which crictl
	I0108 22:56:28.374108    5968 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:56:28.446744    5968 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0108 22:56:28.456712    5968 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 22:56:28.499763    5968 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 22:56:28.539442    5968 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0108 22:56:28.539442    5968 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0108 22:56:28.544605    5968 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0108 22:56:28.544605    5968 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0108 22:56:28.544605    5968 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0108 22:56:28.544605    5968 ip.go:207] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:c4:61:0a Flags:up|broadcast|multicast|running}
	I0108 22:56:28.548150    5968 ip.go:210] interface addr: fe80::3fa5:15f5:46dc:dc8f/64
	I0108 22:56:28.548683    5968 ip.go:210] interface addr: 172.24.96.1/20
	I0108 22:56:28.564852    5968 ssh_runner.go:195] Run: grep 172.24.96.1	host.minikube.internal$ /etc/hosts
	I0108 22:56:28.570239    5968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.24.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:56:28.588030    5968 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 22:56:28.597025    5968 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 22:56:28.621316    5968 docker.go:671] Got preloaded images: 
	I0108 22:56:28.621316    5968 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0108 22:56:28.635026    5968 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0108 22:56:28.662981    5968 ssh_runner.go:195] Run: which lz4
	I0108 22:56:28.683402    5968 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0108 22:56:28.688968    5968 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 22:56:28.689203    5968 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0108 22:56:31.220975    5968 docker.go:635] Took 2.552563 seconds to copy over tarball
	I0108 22:56:31.235602    5968 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 22:56:38.000363    5968 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (6.7646775s)
	I0108 22:56:38.000439    5968 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 22:56:38.085852    5968 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0108 22:56:38.105657    5968 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0108 22:56:38.150234    5968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:56:38.328521    5968 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 22:56:44.195522    5968 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.8670011s)
	I0108 22:56:44.207807    5968 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 22:56:44.236650    5968 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0108 22:56:44.236744    5968 cache_images.go:84] Images are preloaded, skipping loading
	I0108 22:56:44.247588    5968 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 22:56:44.287799    5968 cni.go:84] Creating CNI manager for ""
	I0108 22:56:44.288140    5968 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 22:56:44.288140    5968 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:56:44.288140    5968 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.24.111.87 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-852800 NodeName:addons-852800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.24.111.87"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.24.111.87 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 22:56:44.288505    5968 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.24.111.87
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-852800"
	  kubeletExtraArgs:
	    node-ip: 172.24.111.87
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.24.111.87"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 22:56:44.288699    5968 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-852800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.24.111.87
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-852800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 22:56:44.302476    5968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 22:56:44.317753    5968 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:56:44.332542    5968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:56:44.345713    5968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0108 22:56:44.379787    5968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 22:56:44.407330    5968 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0108 22:56:44.448860    5968 ssh_runner.go:195] Run: grep 172.24.111.87	control-plane.minikube.internal$ /etc/hosts
	I0108 22:56:44.454803    5968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.24.111.87	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:56:44.472522    5968 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800 for IP: 172.24.111.87
	I0108 22:56:44.472638    5968 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:56:44.472903    5968 certs.go:204] generating minikubeCA CA: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0108 22:56:45.112812    5968 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt ...
	I0108 22:56:45.112812    5968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt: {Name:mk7a559291b59fd1cacf23acd98c76aadd417440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:56:45.115094    5968 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key ...
	I0108 22:56:45.115094    5968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key: {Name:mkbedd9bb05780b48b3744f1500f6ab6cea55798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:56:45.117174    5968 certs.go:204] generating proxyClientCA CA: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0108 22:56:45.422224    5968 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0108 22:56:45.423203    5968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkd3d06d8ce13b6ea5bb86cd17b70e85416bbf21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:56:45.424167    5968 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key ...
	I0108 22:56:45.424167    5968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key: {Name:mkf3a613f937d3e2839d9a0e4a8e5134d5e75dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:56:45.425818    5968 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.key
	I0108 22:56:45.425818    5968 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt with IP's: []
	I0108 22:56:45.602841    5968 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt ...
	I0108 22:56:45.602841    5968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: {Name:mk2c1524e4a218759798e9a546549e0acfdf7a09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:56:45.604299    5968 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.key ...
	I0108 22:56:45.604299    5968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.key: {Name:mka5abe3ae1c2812116b5582d8f3bdbd8a23f94e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:56:45.605878    5968 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\apiserver.key.142731bf
	I0108 22:56:45.606543    5968 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\apiserver.crt.142731bf with IP's: [172.24.111.87 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 22:56:45.703450    5968 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\apiserver.crt.142731bf ...
	I0108 22:56:45.703450    5968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\apiserver.crt.142731bf: {Name:mka32259dc2b79df06a81a2dd5e56b8c60b49d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:56:45.705337    5968 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\apiserver.key.142731bf ...
	I0108 22:56:45.705337    5968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\apiserver.key.142731bf: {Name:mkb2a9c64be88a0d9d67b0564ee064057954ff0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:56:45.706218    5968 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\apiserver.crt.142731bf -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\apiserver.crt
	I0108 22:56:45.718165    5968 certs.go:341] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\apiserver.key.142731bf -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\apiserver.key
	I0108 22:56:45.720265    5968 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\proxy-client.key
	I0108 22:56:45.720265    5968 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\proxy-client.crt with IP's: []
	I0108 22:56:45.857103    5968 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\proxy-client.crt ...
	I0108 22:56:45.858121    5968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\proxy-client.crt: {Name:mk3b2040a6bd9c39737262bcc7dc417256da9343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:56:45.859515    5968 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\proxy-client.key ...
	I0108 22:56:45.859515    5968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\proxy-client.key: {Name:mk8d79cacc87aeea65107e28fa6b03ddec532481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:56:45.872004    5968 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0108 22:56:45.872303    5968 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0108 22:56:45.872535    5968 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0108 22:56:45.872535    5968 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0108 22:56:45.873356    5968 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:56:45.914859    5968 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 22:56:45.961747    5968 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:56:46.001079    5968 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 22:56:46.039090    5968 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:56:46.081344    5968 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 22:56:46.123602    5968 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:56:46.164332    5968 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 22:56:46.211752    5968 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:56:46.249016    5968 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:56:46.295523    5968 ssh_runner.go:195] Run: openssl version
	I0108 22:56:46.320335    5968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:56:46.349810    5968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:56:46.355819    5968 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:56 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:56:46.367856    5968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:56:46.391657    5968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:56:46.421380    5968 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:56:46.426207    5968 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 22:56:46.426207    5968 kubeadm.go:404] StartCluster: {Name:addons-852800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-852800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.24.111.87 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:56:46.436330    5968 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 22:56:46.476429    5968 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:56:46.508185    5968 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:56:46.536116    5968 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:56:46.551005    5968 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:56:46.551005    5968 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 22:56:46.825959    5968 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 22:57:00.921619    5968 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 22:57:00.921619    5968 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 22:57:00.922615    5968 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 22:57:00.922615    5968 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 22:57:00.922615    5968 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 22:57:00.922615    5968 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 22:57:00.925657    5968 out.go:204]   - Generating certificates and keys ...
	I0108 22:57:00.926639    5968 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 22:57:00.926639    5968 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 22:57:00.926639    5968 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 22:57:00.926639    5968 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 22:57:00.926639    5968 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 22:57:00.926639    5968 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 22:57:00.926639    5968 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 22:57:00.927615    5968 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-852800 localhost] and IPs [172.24.111.87 127.0.0.1 ::1]
	I0108 22:57:00.927615    5968 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 22:57:00.927615    5968 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-852800 localhost] and IPs [172.24.111.87 127.0.0.1 ::1]
	I0108 22:57:00.927615    5968 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 22:57:00.927615    5968 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 22:57:00.927615    5968 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 22:57:00.928629    5968 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 22:57:00.928629    5968 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 22:57:00.928629    5968 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 22:57:00.928629    5968 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 22:57:00.928629    5968 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 22:57:00.928629    5968 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 22:57:00.928629    5968 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 22:57:00.932616    5968 out.go:204]   - Booting up control plane ...
	I0108 22:57:00.932616    5968 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 22:57:00.932616    5968 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 22:57:00.932616    5968 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 22:57:00.933620    5968 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:57:00.933620    5968 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:57:00.933620    5968 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 22:57:00.933620    5968 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 22:57:00.933620    5968 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.004389 seconds
	I0108 22:57:00.934634    5968 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 22:57:00.934634    5968 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 22:57:00.934634    5968 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 22:57:00.934634    5968 kubeadm.go:322] [mark-control-plane] Marking the node addons-852800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 22:57:00.935615    5968 kubeadm.go:322] [bootstrap-token] Using token: auco2l.m2ny08ve0thga9w1
	I0108 22:57:00.938621    5968 out.go:204]   - Configuring RBAC rules ...
	I0108 22:57:00.938621    5968 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 22:57:00.938621    5968 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 22:57:00.938621    5968 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 22:57:00.939637    5968 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 22:57:00.939637    5968 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 22:57:00.939637    5968 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 22:57:00.939637    5968 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 22:57:00.940607    5968 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 22:57:00.940607    5968 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 22:57:00.940607    5968 kubeadm.go:322] 
	I0108 22:57:00.940607    5968 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 22:57:00.940607    5968 kubeadm.go:322] 
	I0108 22:57:00.940607    5968 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 22:57:00.940607    5968 kubeadm.go:322] 
	I0108 22:57:00.940607    5968 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 22:57:00.940607    5968 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 22:57:00.940607    5968 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 22:57:00.940607    5968 kubeadm.go:322] 
	I0108 22:57:00.940607    5968 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 22:57:00.940607    5968 kubeadm.go:322] 
	I0108 22:57:00.940607    5968 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 22:57:00.941607    5968 kubeadm.go:322] 
	I0108 22:57:00.941607    5968 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 22:57:00.941607    5968 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 22:57:00.941607    5968 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 22:57:00.941607    5968 kubeadm.go:322] 
	I0108 22:57:00.941607    5968 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 22:57:00.941607    5968 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 22:57:00.941607    5968 kubeadm.go:322] 
	I0108 22:57:00.941607    5968 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token auco2l.m2ny08ve0thga9w1 \
	I0108 22:57:00.942626    5968 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6a12e94bf3397e7db59fa944f4e20c2c2c34b5794397b381e3c5134eb1900391 \
	I0108 22:57:00.942626    5968 kubeadm.go:322] 	--control-plane 
	I0108 22:57:00.942626    5968 kubeadm.go:322] 
	I0108 22:57:00.942626    5968 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 22:57:00.942626    5968 kubeadm.go:322] 
	I0108 22:57:00.942626    5968 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token auco2l.m2ny08ve0thga9w1 \
	I0108 22:57:00.942626    5968 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6a12e94bf3397e7db59fa944f4e20c2c2c34b5794397b381e3c5134eb1900391 
	I0108 22:57:00.942626    5968 cni.go:84] Creating CNI manager for ""
	I0108 22:57:00.942626    5968 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 22:57:00.945615    5968 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:57:00.965620    5968 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:57:00.979354    5968 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:57:01.026213    5968 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:57:01.043575    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:01.043575    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=addons-852800 minikube.k8s.io/updated_at=2024_01_08T22_57_01_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:01.056926    5968 ops.go:34] apiserver oom_adj: -16
	I0108 22:57:01.358795    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:01.867036    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:02.372257    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:02.861014    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:03.362894    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:03.862775    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:04.369691    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:04.869235    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:05.369771    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:05.861836    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:06.367482    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:06.869237    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:07.359297    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:07.861901    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:08.361805    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:08.869693    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:09.365769    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:09.869482    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:10.370265    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:10.870621    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:11.371290    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:11.861633    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:12.362929    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:12.867344    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:13.361292    5968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:57:13.512289    5968 kubeadm.go:1088] duration metric: took 12.4860034s to wait for elevateKubeSystemPrivileges.
	I0108 22:57:13.512289    5968 kubeadm.go:406] StartCluster complete in 27.086079s
	I0108 22:57:13.512289    5968 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:57:13.512289    5968 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0108 22:57:13.513281    5968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:57:13.516299    5968 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:57:13.516299    5968 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0108 22:57:13.516299    5968 addons.go:69] Setting yakd=true in profile "addons-852800"
	I0108 22:57:13.516299    5968 config.go:182] Loaded profile config "addons-852800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 22:57:13.516299    5968 addons.go:237] Setting addon yakd=true in "addons-852800"
	I0108 22:57:13.516299    5968 addons.go:69] Setting cloud-spanner=true in profile "addons-852800"
	I0108 22:57:13.516299    5968 addons.go:69] Setting inspektor-gadget=true in profile "addons-852800"
	I0108 22:57:13.516299    5968 addons.go:237] Setting addon cloud-spanner=true in "addons-852800"
	I0108 22:57:13.516299    5968 addons.go:69] Setting ingress-dns=true in profile "addons-852800"
	I0108 22:57:13.516299    5968 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-852800"
	I0108 22:57:13.516299    5968 addons.go:237] Setting addon ingress-dns=true in "addons-852800"
	I0108 22:57:13.516299    5968 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-852800"
	I0108 22:57:13.516299    5968 host.go:66] Checking if "addons-852800" exists ...
	I0108 22:57:13.516299    5968 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-852800"
	I0108 22:57:13.516299    5968 addons.go:69] Setting registry=true in profile "addons-852800"
	I0108 22:57:13.516299    5968 addons.go:237] Setting addon registry=true in "addons-852800"
	I0108 22:57:13.516299    5968 host.go:66] Checking if "addons-852800" exists ...
	I0108 22:57:13.516299    5968 addons.go:237] Setting addon inspektor-gadget=true in "addons-852800"
	I0108 22:57:13.516299    5968 host.go:66] Checking if "addons-852800" exists ...
	I0108 22:57:13.516299    5968 addons.go:69] Setting metrics-server=true in profile "addons-852800"
	I0108 22:57:13.516299    5968 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-852800"
	I0108 22:57:13.516299    5968 addons.go:69] Setting default-storageclass=true in profile "addons-852800"
	I0108 22:57:13.517300    5968 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-852800"
	I0108 22:57:13.517300    5968 host.go:66] Checking if "addons-852800" exists ...
	I0108 22:57:13.516299    5968 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-852800"
	I0108 22:57:13.517300    5968 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-852800"
	I0108 22:57:13.517300    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:13.517300    5968 host.go:66] Checking if "addons-852800" exists ...
	I0108 22:57:13.516299    5968 addons.go:69] Setting helm-tiller=true in profile "addons-852800"
	I0108 22:57:13.516299    5968 addons.go:69] Setting ingress=true in profile "addons-852800"
	I0108 22:57:13.517300    5968 addons.go:237] Setting addon helm-tiller=true in "addons-852800"
	I0108 22:57:13.517300    5968 addons.go:237] Setting addon ingress=true in "addons-852800"
	I0108 22:57:13.517300    5968 host.go:66] Checking if "addons-852800" exists ...
	I0108 22:57:13.517300    5968 host.go:66] Checking if "addons-852800" exists ...
	I0108 22:57:13.516299    5968 addons.go:69] Setting storage-provisioner=true in profile "addons-852800"
	I0108 22:57:13.518301    5968 addons.go:237] Setting addon storage-provisioner=true in "addons-852800"
	I0108 22:57:13.518301    5968 host.go:66] Checking if "addons-852800" exists ...
	I0108 22:57:13.516299    5968 addons.go:69] Setting volumesnapshots=true in profile "addons-852800"
	I0108 22:57:13.518301    5968 addons.go:237] Setting addon volumesnapshots=true in "addons-852800"
	I0108 22:57:13.518301    5968 host.go:66] Checking if "addons-852800" exists ...
	I0108 22:57:13.518301    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:13.517300    5968 addons.go:237] Setting addon metrics-server=true in "addons-852800"
	I0108 22:57:13.519294    5968 host.go:66] Checking if "addons-852800" exists ...
	I0108 22:57:13.516299    5968 host.go:66] Checking if "addons-852800" exists ...
	I0108 22:57:13.516299    5968 addons.go:69] Setting gcp-auth=true in profile "addons-852800"
	I0108 22:57:13.519294    5968 mustload.go:65] Loading cluster: addons-852800
	I0108 22:57:13.517300    5968 host.go:66] Checking if "addons-852800" exists ...
	I0108 22:57:13.520286    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:13.520286    5968 config.go:182] Loaded profile config "addons-852800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 22:57:13.520286    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:13.521303    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:13.522293    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:13.523298    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:13.523298    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:13.524323    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:13.524323    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:13.527012    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:13.527478    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:13.528319    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:13.528319    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:13.528319    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:14.262330    5968 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.24.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 22:57:14.593395    5968 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-852800" context rescaled to 1 replicas
	I0108 22:57:14.593395    5968 start.go:223] Will wait 6m0s for node &{Name: IP:172.24.111.87 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 22:57:14.614947    5968 out.go:177] * Verifying Kubernetes components...
	I0108 22:57:14.696222    5968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:57:20.046900    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:20.046900    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:20.057656    5968 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0108 22:57:20.068028    5968 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 22:57:20.068028    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0108 22:57:20.068028    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:20.081856    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:20.081856    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:20.087245    5968 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:57:20.092254    5968 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:57:20.092254    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:57:20.092254    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:20.088350    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:20.093396    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:20.096088    5968 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0108 22:57:20.094248    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:20.099978    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:20.107986    5968 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0108 22:57:20.100978    5968 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 22:57:20.110978    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 22:57:20.111962    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:20.111962    5968 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 22:57:20.111962    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0108 22:57:20.111962    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:20.120984    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:20.120984    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:20.128963    5968 addons.go:237] Setting addon default-storageclass=true in "addons-852800"
	I0108 22:57:20.128963    5968 host.go:66] Checking if "addons-852800" exists ...
	I0108 22:57:20.130653    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:20.206695    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:20.206695    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:20.209509    5968 out.go:177]   - Using image docker.io/registry:2.8.3
	I0108 22:57:20.213020    5968 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0108 22:57:20.215510    5968 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I0108 22:57:20.216143    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0108 22:57:20.216143    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:20.213065    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:20.216143    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:20.219551    5968 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0108 22:57:20.222172    5968 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0108 22:57:20.223241    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0108 22:57:20.223241    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:20.221334    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:20.223241    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:20.223241    5968 host.go:66] Checking if "addons-852800" exists ...
	I0108 22:57:20.221334    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:20.225133    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:20.230986    5968 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0108 22:57:20.242857    5968 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0108 22:57:20.257101    5968 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0108 22:57:20.285092    5968 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0108 22:57:20.296419    5968 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0108 22:57:20.351058    5968 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0108 22:57:20.383047    5968 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0108 22:57:20.409397    5968 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0108 22:57:20.415047    5968 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0108 22:57:20.415047    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0108 22:57:20.415047    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:20.426051    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:20.426051    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:20.439712    5968 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0108 22:57:20.445193    5968 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I0108 22:57:20.445193    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0108 22:57:20.445193    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:20.483299    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:20.483299    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:20.524957    5968 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0108 22:57:20.542904    5968 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0108 22:57:20.542904    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0108 22:57:20.542904    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:20.551157    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:20.551157    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:20.561978    5968 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0108 22:57:20.564981    5968 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0108 22:57:20.564981    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0108 22:57:20.564981    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:20.573144    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:20.573996    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:20.578970    5968 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 22:57:20.591977    5968 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.8957549s)
	I0108 22:57:20.591977    5968 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 22:57:20.601147    5968 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0108 22:57:20.594976    5968 node_ready.go:35] waiting up to 6m0s for node "addons-852800" to be "Ready" ...
	I0108 22:57:20.591977    5968 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.24.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.3296466s)
	I0108 22:57:20.607860    5968 start.go:929] {"host.minikube.internal": 172.24.96.1} host record injected into CoreDNS's ConfigMap
	I0108 22:57:20.609229    5968 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 22:57:20.609229    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0108 22:57:20.609229    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:20.724814    5968 node_ready.go:49] node "addons-852800" has status "Ready":"True"
	I0108 22:57:20.724814    5968 node_ready.go:38] duration metric: took 117.662ms waiting for node "addons-852800" to be "Ready" ...
	I0108 22:57:20.724814    5968 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:57:20.779123    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:20.779123    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:20.782120    5968 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0108 22:57:20.787122    5968 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0108 22:57:20.787122    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0108 22:57:20.787122    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:20.789132    5968 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ppfkf" in "kube-system" namespace to be "Ready" ...
	I0108 22:57:20.818440    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:20.818440    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:20.821443    5968 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-852800"
	I0108 22:57:20.821443    5968 host.go:66] Checking if "addons-852800" exists ...
	I0108 22:57:20.823062    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:23.092048    5968 pod_ready.go:102] pod "coredns-5dd5756b68-ppfkf" in "kube-system" namespace has status "Ready":"False"
	I0108 22:57:25.336236    5968 pod_ready.go:102] pod "coredns-5dd5756b68-ppfkf" in "kube-system" namespace has status "Ready":"False"
	I0108 22:57:25.518610    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:25.518610    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:25.518610    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:57:26.250104    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:26.250104    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:26.250104    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:57:26.280198    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:26.280198    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:26.280198    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:57:26.286342    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:26.286342    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:26.286342    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:57:26.397211    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:26.397211    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:26.397211    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:57:26.553248    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:26.553248    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:26.553248    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:57:26.625381    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:26.625381    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:26.625381    5968 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:57:26.625381    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:57:26.625381    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:26.679611    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:26.679611    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:26.679611    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:57:26.838063    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:26.838063    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:26.838633    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:57:26.851305    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:26.851305    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:26.851305    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:57:27.287997    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:27.287997    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:27.319012    5968 out.go:177]   - Using image docker.io/busybox:stable
	I0108 22:57:27.296012    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:27.344026    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:27.364390    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:27.365015    5968 pod_ready.go:102] pod "coredns-5dd5756b68-ppfkf" in "kube-system" namespace has status "Ready":"False"
	I0108 22:57:27.425154    5968 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0108 22:57:27.388137    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:27.388137    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:27.388137    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:27.439519    5968 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0108 22:57:27.471429    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:27.477410    5968 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 22:57:27.477410    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0108 22:57:27.477410    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:27.486103    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:57:27.494105    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:57:27.496105    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:57:29.442977    5968 pod_ready.go:102] pod "coredns-5dd5756b68-ppfkf" in "kube-system" namespace has status "Ready":"False"
	I0108 22:57:31.939638    5968 pod_ready.go:102] pod "coredns-5dd5756b68-ppfkf" in "kube-system" namespace has status "Ready":"False"
	I0108 22:57:32.281858    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:32.281858    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:32.281858    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:57:32.397858    5968 pod_ready.go:92] pod "coredns-5dd5756b68-ppfkf" in "kube-system" namespace has status "Ready":"True"
	I0108 22:57:32.397858    5968 pod_ready.go:81] duration metric: took 11.6087244s waiting for pod "coredns-5dd5756b68-ppfkf" in "kube-system" namespace to be "Ready" ...
	I0108 22:57:32.397858    5968 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vjs6l" in "kube-system" namespace to be "Ready" ...
	I0108 22:57:32.566486    5968 pod_ready.go:92] pod "coredns-5dd5756b68-vjs6l" in "kube-system" namespace has status "Ready":"True"
	I0108 22:57:32.566486    5968 pod_ready.go:81] duration metric: took 168.6287ms waiting for pod "coredns-5dd5756b68-vjs6l" in "kube-system" namespace to be "Ready" ...
	I0108 22:57:32.566486    5968 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-852800" in "kube-system" namespace to be "Ready" ...
	I0108 22:57:32.593506    5968 pod_ready.go:92] pod "etcd-addons-852800" in "kube-system" namespace has status "Ready":"True"
	I0108 22:57:32.593506    5968 pod_ready.go:81] duration metric: took 27.0194ms waiting for pod "etcd-addons-852800" in "kube-system" namespace to be "Ready" ...
	I0108 22:57:32.593506    5968 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-852800" in "kube-system" namespace to be "Ready" ...
	I0108 22:57:32.620458    5968 pod_ready.go:92] pod "kube-apiserver-addons-852800" in "kube-system" namespace has status "Ready":"True"
	I0108 22:57:32.620458    5968 pod_ready.go:81] duration metric: took 26.9519ms waiting for pod "kube-apiserver-addons-852800" in "kube-system" namespace to be "Ready" ...
	I0108 22:57:32.620458    5968 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-852800" in "kube-system" namespace to be "Ready" ...
	I0108 22:57:32.638456    5968 pod_ready.go:92] pod "kube-controller-manager-addons-852800" in "kube-system" namespace has status "Ready":"True"
	I0108 22:57:32.638456    5968 pod_ready.go:81] duration metric: took 17.9985ms waiting for pod "kube-controller-manager-addons-852800" in "kube-system" namespace to be "Ready" ...
	I0108 22:57:32.638456    5968 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k5rgc" in "kube-system" namespace to be "Ready" ...
	I0108 22:57:32.739942    5968 pod_ready.go:92] pod "kube-proxy-k5rgc" in "kube-system" namespace has status "Ready":"True"
	I0108 22:57:32.740771    5968 pod_ready.go:81] duration metric: took 102.3149ms waiting for pod "kube-proxy-k5rgc" in "kube-system" namespace to be "Ready" ...
	I0108 22:57:32.743381    5968 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-852800" in "kube-system" namespace to be "Ready" ...
	I0108 22:57:33.146664    5968 pod_ready.go:92] pod "kube-scheduler-addons-852800" in "kube-system" namespace has status "Ready":"True"
	I0108 22:57:33.146664    5968 pod_ready.go:81] duration metric: took 403.2833ms waiting for pod "kube-scheduler-addons-852800" in "kube-system" namespace to be "Ready" ...
	I0108 22:57:33.146664    5968 pod_ready.go:38] duration metric: took 12.4218488s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:57:33.146664    5968 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:57:33.175315    5968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:57:33.175593    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:57:33.175593    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:33.175855    5968 sshutil.go:53] new ssh client: &{IP:172.24.111.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0108 22:57:33.270313    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:57:33.270313    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:33.270313    5968 sshutil.go:53] new ssh client: &{IP:172.24.111.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0108 22:57:33.307342    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:57:33.307342    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:33.307910    5968 sshutil.go:53] new ssh client: &{IP:172.24.111.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0108 22:57:33.378030    5968 api_server.go:72] duration metric: took 18.7838844s to wait for apiserver process to appear ...
	I0108 22:57:33.378030    5968 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:57:33.378030    5968 api_server.go:253] Checking apiserver healthz at https://172.24.111.87:8443/healthz ...
	I0108 22:57:33.404640    5968 api_server.go:279] https://172.24.111.87:8443/healthz returned 200:
	ok
	I0108 22:57:33.411645    5968 api_server.go:141] control plane version: v1.28.4
	I0108 22:57:33.411645    5968 api_server.go:131] duration metric: took 33.6155ms to wait for apiserver health ...
	I0108 22:57:33.411645    5968 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:57:33.445153    5968 system_pods.go:59] 7 kube-system pods found
	I0108 22:57:33.445754    5968 system_pods.go:61] "coredns-5dd5756b68-ppfkf" [7a420ec8-5e68-418d-9389-820361d9e1c9] Running
	I0108 22:57:33.445754    5968 system_pods.go:61] "coredns-5dd5756b68-vjs6l" [a72d92fe-0b88-4b9c-a984-bfb2b81bb1d9] Running
	I0108 22:57:33.445754    5968 system_pods.go:61] "etcd-addons-852800" [544ed41c-0af2-47eb-8749-cee9b7ef5a7e] Running
	I0108 22:57:33.445754    5968 system_pods.go:61] "kube-apiserver-addons-852800" [5716c340-ef3c-4a1b-9bf7-551e24601fcf] Running
	I0108 22:57:33.445754    5968 system_pods.go:61] "kube-controller-manager-addons-852800" [af3231e1-47cc-4499-9eb8-b067607534d8] Running
	I0108 22:57:33.445754    5968 system_pods.go:61] "kube-proxy-k5rgc" [403e0403-2977-4efd-a448-915a73cc18a7] Running
	I0108 22:57:33.445754    5968 system_pods.go:61] "kube-scheduler-addons-852800" [c4631da3-2afd-47a3-ace5-2669df5826ff] Running
	I0108 22:57:33.445754    5968 system_pods.go:74] duration metric: took 34.1087ms to wait for pod list to return data ...
	I0108 22:57:33.445754    5968 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:57:33.466167    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:57:33.466167    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:33.466167    5968 sshutil.go:53] new ssh client: &{IP:172.24.111.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0108 22:57:33.516500    5968 default_sa.go:45] found service account: "default"
	I0108 22:57:33.516801    5968 default_sa.go:55] duration metric: took 70.9662ms for default service account to be created ...
	I0108 22:57:33.516891    5968 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 22:57:33.536634    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:57:33.536634    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:33.536838    5968 sshutil.go:53] new ssh client: &{IP:172.24.111.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0108 22:57:33.635307    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:57:33.635307    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:33.635406    5968 sshutil.go:53] new ssh client: &{IP:172.24.111.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0108 22:57:33.662226    5968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 22:57:33.708047    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:57:33.709045    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:33.709045    5968 sshutil.go:53] new ssh client: &{IP:172.24.111.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0108 22:57:33.722037    5968 system_pods.go:86] 7 kube-system pods found
	I0108 22:57:33.723036    5968 system_pods.go:89] "coredns-5dd5756b68-ppfkf" [7a420ec8-5e68-418d-9389-820361d9e1c9] Running
	I0108 22:57:33.723036    5968 system_pods.go:89] "coredns-5dd5756b68-vjs6l" [a72d92fe-0b88-4b9c-a984-bfb2b81bb1d9] Running
	I0108 22:57:33.723036    5968 system_pods.go:89] "etcd-addons-852800" [544ed41c-0af2-47eb-8749-cee9b7ef5a7e] Running
	I0108 22:57:33.723036    5968 system_pods.go:89] "kube-apiserver-addons-852800" [5716c340-ef3c-4a1b-9bf7-551e24601fcf] Running
	I0108 22:57:33.723036    5968 system_pods.go:89] "kube-controller-manager-addons-852800" [af3231e1-47cc-4499-9eb8-b067607534d8] Running
	I0108 22:57:33.723036    5968 system_pods.go:89] "kube-proxy-k5rgc" [403e0403-2977-4efd-a448-915a73cc18a7] Running
	I0108 22:57:33.723036    5968 system_pods.go:89] "kube-scheduler-addons-852800" [c4631da3-2afd-47a3-ace5-2669df5826ff] Running
	I0108 22:57:33.723036    5968 system_pods.go:126] duration metric: took 206.145ms to wait for k8s-apps to be running ...
	I0108 22:57:33.723036    5968 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:57:33.740037    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:57:33.740037    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:33.740037    5968 sshutil.go:53] new ssh client: &{IP:172.24.111.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0108 22:57:33.748038    5968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:57:33.752039    5968 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 22:57:33.752039    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0108 22:57:33.791037    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:33.791037    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:33.791037    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:57:33.819981    5968 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I0108 22:57:33.819981    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0108 22:57:33.859765    5968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 22:57:33.891755    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:57:33.891755    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:33.892756    5968 sshutil.go:53] new ssh client: &{IP:172.24.111.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0108 22:57:33.909934    5968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:57:33.916936    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:33.916936    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:33.916936    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:57:33.917977    5968 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 22:57:33.917977    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 22:57:33.976330    5968 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0108 22:57:33.976330    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0108 22:57:34.000677    5968 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0108 22:57:34.000677    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0108 22:57:34.085531    5968 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0108 22:57:34.085531    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0108 22:57:34.085531    5968 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:57:34.085531    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 22:57:34.175878    5968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0108 22:57:34.276953    5968 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0108 22:57:34.277013    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0108 22:57:34.292252    5968 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0108 22:57:34.292392    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0108 22:57:34.306186    5968 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0108 22:57:34.306271    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0108 22:57:34.390397    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:57:34.390792    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:34.391091    5968 sshutil.go:53] new ssh client: &{IP:172.24.111.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0108 22:57:34.463352    5968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:57:34.484372    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:57:34.484372    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:34.484372    5968 sshutil.go:53] new ssh client: &{IP:172.24.111.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0108 22:57:34.491347    5968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 22:57:34.510017    5968 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0108 22:57:34.510017    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0108 22:57:34.571614    5968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0108 22:57:34.572618    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:57:34.572618    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:34.572618    5968 sshutil.go:53] new ssh client: &{IP:172.24.111.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0108 22:57:34.584567    5968 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0108 22:57:34.584631    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0108 22:57:34.782795    5968 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0108 22:57:34.782925    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0108 22:57:34.823337    5968 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0108 22:57:34.823337    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0108 22:57:35.030379    5968 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0108 22:57:35.030379    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0108 22:57:35.043570    5968 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0108 22:57:35.043653    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0108 22:57:35.063585    5968 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0108 22:57:35.063659    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0108 22:57:35.093018    5968 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0108 22:57:35.093187    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0108 22:57:35.161324    5968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0108 22:57:35.236245    5968 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I0108 22:57:35.236328    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0108 22:57:35.273849    5968 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0108 22:57:35.273849    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0108 22:57:35.275847    5968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0108 22:57:35.386422    5968 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0108 22:57:35.386503    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0108 22:57:35.423238    5968 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0108 22:57:35.423238    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0108 22:57:35.435128    5968 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0108 22:57:35.435128    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0108 22:57:35.604897    5968 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0108 22:57:35.605011    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0108 22:57:35.612708    5968 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0108 22:57:35.612708    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0108 22:57:35.619167    5968 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0108 22:57:35.619167    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0108 22:57:35.798761    5968 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0108 22:57:35.798761    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0108 22:57:35.805380    5968 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0108 22:57:35.805478    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0108 22:57:35.814470    5968 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0108 22:57:35.814470    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0108 22:57:35.999473    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:57:35.999473    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:35.999856    5968 sshutil.go:53] new ssh client: &{IP:172.24.111.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0108 22:57:36.068324    5968 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 22:57:36.068324    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0108 22:57:36.109820    5968 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 22:57:36.109820    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0108 22:57:36.153544    5968 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I0108 22:57:36.153604    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0108 22:57:36.407903    5968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 22:57:36.430900    5968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 22:57:36.439608    5968 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 22:57:36.439608    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0108 22:57:36.597172    5968 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.8491337s)
	I0108 22:57:36.597288    5968 system_svc.go:56] duration metric: took 2.8742524s WaitForService to wait for kubelet.
	I0108 22:57:36.597288    5968 kubeadm.go:581] duration metric: took 22.0031427s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 22:57:36.597288    5968 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:57:36.597586    5968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.9353595s)
	I0108 22:57:36.602313    5968 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:57:36.602413    5968 node_conditions.go:123] node cpu capacity is 2
	I0108 22:57:36.602413    5968 node_conditions.go:105] duration metric: took 5.1241ms to run NodePressure ...
	I0108 22:57:36.602413    5968 start.go:228] waiting for startup goroutines ...
	I0108 22:57:36.705121    5968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:57:36.710123    5968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 22:57:36.801719    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:57:36.801719    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:36.801719    5968 sshutil.go:53] new ssh client: &{IP:172.24.111.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0108 22:57:36.849854    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:57:36.850023    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:36.850102    5968 sshutil.go:53] new ssh client: &{IP:172.24.111.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0108 22:57:37.522547    5968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 22:57:37.600649    5968 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0108 22:57:37.803614    5968 addons.go:237] Setting addon gcp-auth=true in "addons-852800"
	I0108 22:57:37.803910    5968 host.go:66] Checking if "addons-852800" exists ...
	I0108 22:57:37.805547    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:39.481479    5968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.6217133s)
	I0108 22:57:40.100044    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:40.100044    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:40.113042    5968 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0108 22:57:40.113042    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-852800 ).state
	I0108 22:57:40.545730    5968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.3698515s)
	I0108 22:57:40.545730    5968 addons.go:473] Verifying addon registry=true in "addons-852800"
	I0108 22:57:40.548690    5968 out.go:177] * Verifying registry addon...
	I0108 22:57:40.550693    5968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.6407579s)
	I0108 22:57:40.553692    5968 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0108 22:57:40.570217    5968 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0108 22:57:40.570282    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:41.198615    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:41.583507    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:41.942905    5968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.4795524s)
	I0108 22:57:41.942905    5968 addons.go:473] Verifying addon metrics-server=true in "addons-852800"
	I0108 22:57:42.070520    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:42.570807    5968 main.go:141] libmachine: [stdout =====>] : Running
	
	I0108 22:57:42.571001    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:42.571047    5968 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-852800 ).networkadapters[0]).ipaddresses[0]
	I0108 22:57:42.571896    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:43.066112    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:43.574083    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:44.080749    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:44.623509    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:45.091979    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:45.338505    5968 main.go:141] libmachine: [stdout =====>] : 172.24.111.87
	
	I0108 22:57:45.338505    5968 main.go:141] libmachine: [stderr =====>] : 
	I0108 22:57:45.338505    5968 sshutil.go:53] new ssh client: &{IP:172.24.111.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-852800\id_rsa Username:docker}
	I0108 22:57:45.568549    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:46.064208    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:46.584091    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:47.072713    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:47.564279    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:48.069799    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:48.574136    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:49.083823    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:49.176494    5968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (14.604878s)
	I0108 22:57:49.176494    5968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (14.0151682s)
	I0108 22:57:49.176494    5968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (14.6851453s)
	I0108 22:57:49.176494    5968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (13.9006458s)
	I0108 22:57:49.176494    5968 addons.go:473] Verifying addon ingress=true in "addons-852800"
	I0108 22:57:49.179494    5968 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-852800 service yakd-dashboard -n yakd-dashboard
	
	
	I0108 22:57:49.183531    5968 out.go:177] * Verifying ingress addon...
	I0108 22:57:49.187507    5968 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0108 22:57:49.197491    5968 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0108 22:57:49.197491    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:49.562523    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:49.798337    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:50.077741    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:50.196079    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:50.590228    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:50.795226    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:50.934635    5968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (14.5267303s)
	I0108 22:57:50.934765    5968 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-852800"
	I0108 22:57:50.934822    5968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (14.5039204s)
	I0108 22:57:50.937418    5968 out.go:177] * Verifying csi-hostpath-driver addon...
	I0108 22:57:50.934822    5968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (14.2296994s)
	W0108 22:57:50.934822    5968 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0108 22:57:50.934822    5968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (14.2246969s)
	I0108 22:57:50.934822    5968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (13.4116945s)
	I0108 22:57:50.938412    5968 retry.go:31] will retry after 140.320088ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0108 22:57:50.934822    5968 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (10.8217789s)
	I0108 22:57:50.941421    5968 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 22:57:50.943420    5968 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0108 22:57:50.944420    5968 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0108 22:57:50.946428    5968 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0108 22:57:50.947433    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	W0108 22:57:50.992160    5968 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
	I0108 22:57:51.008410    5968 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0108 22:57:51.008410    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:51.050763    5968 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0108 22:57:51.050831    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0108 22:57:51.092313    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:51.104984    5968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 22:57:51.207912    5968 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 22:57:51.207983    5968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0108 22:57:51.209441    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:51.421383    5968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 22:57:51.472812    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:51.576170    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:51.698431    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:51.967257    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:52.086510    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:52.213505    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:52.508027    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:52.589792    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:52.706047    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:52.956678    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:53.070076    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:53.209445    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:53.464928    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:53.571881    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:53.697939    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:53.968664    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:54.067105    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:54.206235    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:54.466777    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:54.566771    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:54.709387    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:54.965315    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:55.074718    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:55.202077    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:55.229109    5968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.1241242s)
	I0108 22:57:55.456444    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:55.562257    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:55.707331    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:55.864102    5968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (4.4427179s)
	I0108 22:57:55.872495    5968 addons.go:473] Verifying addon gcp-auth=true in "addons-852800"
	I0108 22:57:55.876337    5968 out.go:177] * Verifying gcp-auth addon...
	I0108 22:57:55.882628    5968 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0108 22:57:55.898261    5968 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0108 22:57:55.898261    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:55.955951    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:56.068876    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:56.394893    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:56.500484    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:56.503492    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:56.570716    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:56.696091    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:56.897803    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:56.966166    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:57.074925    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:57.199312    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:57.404665    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:57.455523    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:57.563666    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:57.708623    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:57.894979    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:57.970152    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:58.073652    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:58.197848    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:58.401486    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:58.468696    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:58.575315    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:58.700418    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:58.903079    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:58.967456    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:59.063062    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:59.204952    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:59.395392    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:59.463024    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:59.573155    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:59.696588    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:59.900307    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:59.965060    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:00.073848    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:00.196414    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:00.402971    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:00.468310    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:00.561427    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:00.707516    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:00.906690    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:00.967864    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:01.075082    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:01.201015    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:01.389750    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:01.454850    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:01.564813    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:01.703579    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:01.895184    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:01.960535    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:02.073088    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:02.196447    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:02.401676    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:02.467212    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:02.575296    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:02.702577    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:02.888319    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:02.953120    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:03.064415    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:03.206738    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:03.395893    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:03.463576    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:03.570810    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:03.700033    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:03.900169    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:03.963793    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:04.075277    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:04.201072    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:04.390458    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:04.456592    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:04.569324    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:04.706325    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:04.892298    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:04.962001    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:05.066052    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:05.208811    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:05.397674    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:05.463076    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:05.574006    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:05.699972    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:05.889973    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:05.953545    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:06.063507    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:06.203062    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:06.395098    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:06.462324    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:06.571686    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:06.697646    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:06.898160    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:06.964724    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:07.630378    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:07.630378    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:07.633389    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:07.635379    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:07.638426    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:08.339429    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:08.339569    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:08.345063    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:08.346802    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:08.350498    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:08.404329    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:08.469455    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:08.575106    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:08.701420    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:08.891346    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:08.958456    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:09.071094    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:09.237229    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:09.392362    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:09.458877    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:09.568138    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:09.709003    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:09.896130    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:09.961570    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:10.080558    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:10.269805    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:10.390753    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:10.457749    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:10.563313    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:10.706474    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:10.896604    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:10.961237    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:11.072104    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:11.197624    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:11.403808    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:11.465607    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:11.575296    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:11.698908    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:11.890085    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:11.955738    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:12.064915    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:12.206956    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:12.392680    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:12.456488    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:12.567144    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:12.709103    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:12.900349    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:12.964864    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:13.076649    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:13.200820    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:13.392632    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:13.457474    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:13.566606    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:13.776791    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:13.898880    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:13.964562    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:14.078545    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:14.200726    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:14.396770    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:14.457989    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:14.565935    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:14.706480    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:14.894396    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:14.960058    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:15.072735    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:15.197674    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:15.401679    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:15.467016    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:15.560540    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:15.701842    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:15.892138    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:15.955963    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:16.064587    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:16.206001    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:16.395843    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:16.461145    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:16.573354    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:16.696785    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:16.902023    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:16.967449    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:17.076757    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:17.200286    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:17.389773    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:17.454314    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:17.564816    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:17.708356    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:17.896141    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:17.961141    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:18.068161    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:18.209141    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:18.397751    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:18.465722    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:18.572903    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:18.699613    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:18.902378    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:18.954026    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:19.064376    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:19.204738    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:19.392675    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:19.470318    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:19.576469    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:19.706621    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:19.902715    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:19.959157    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:20.076104    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:20.205282    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:20.388453    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:20.452846    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:20.561895    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:20.705117    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:20.894457    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:20.958144    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:21.069662    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:21.195373    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:21.402722    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:21.453047    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:21.561455    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:21.703775    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:21.888887    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:21.956168    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:22.067866    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:22.207497    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:22.397291    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:22.469282    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:22.572765    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:22.700785    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:22.890818    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:22.957054    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:23.068052    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:23.207538    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:23.397437    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:23.462379    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:23.574062    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:23.699029    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:23.891117    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:23.954909    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:24.066730    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:24.207352    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:24.397304    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:24.465055    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:24.574131    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:24.699660    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:24.900820    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:24.964372    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:25.076045    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:25.199659    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:25.389153    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:25.454340    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:25.560939    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:25.705050    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:25.894929    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:26.044637    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:26.070639    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:26.201653    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:26.400389    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:26.466605    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:26.561002    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:26.704511    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:26.890689    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:26.957223    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:27.067103    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:27.207716    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:27.396680    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:27.462635    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:27.573561    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:27.698942    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:27.888843    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:27.957804    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:28.066550    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:28.207969    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:28.400939    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:28.460530    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:28.569498    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:28.708481    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:28.898602    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:28.962829    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:29.072618    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:29.196742    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:29.401415    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:29.466328    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:29.572861    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:29.699396    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:29.900415    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:29.964493    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:30.070546    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:30.195244    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:30.532892    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:30.535716    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:30.574813    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:31.387495    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:31.388396    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:31.396867    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:31.396867    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:31.403375    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:31.404219    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:31.467559    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:31.560185    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:31.711106    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:31.892196    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:31.957919    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:32.070238    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:32.208034    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:32.398559    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:32.463355    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:32.572008    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:32.697992    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:32.889316    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:32.954920    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:33.066144    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:33.208311    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:33.394683    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:33.460852    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:33.570156    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:33.695490    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:33.902011    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:33.966875    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:34.061453    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:34.203324    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:34.390926    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:34.455557    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:34.569494    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:34.708304    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:34.899188    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:34.960134    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:35.066427    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:35.205668    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:35.391196    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:35.459358    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:35.568166    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:35.709025    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:35.896104    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:35.967546    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:36.070173    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:36.198620    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:36.399100    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:36.466122    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:36.575124    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:36.703228    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:36.891085    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:36.956976    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:37.067759    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:37.199747    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:37.401301    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:37.453307    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:37.561676    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:37.702313    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:37.893757    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:37.971063    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:38.069805    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:38.294589    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:38.404888    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:38.463265    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:38.575715    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:38.700162    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:38.904065    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:38.954030    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:39.063005    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:39.206430    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:39.392672    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:39.459886    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:39.570430    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:39.703529    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:39.904162    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:39.954476    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:40.064237    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:40.208325    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:40.395846    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:40.461836    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:40.574835    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:40.700939    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:40.891952    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:40.958514    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:41.069460    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:41.197403    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:41.403657    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:41.465666    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:41.576880    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:41.702245    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:41.887550    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:41.957578    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:42.067179    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:42.208294    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:42.397722    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:42.462880    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:42.573241    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:42.698520    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:42.901199    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:42.964618    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:43.075092    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:43.201028    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:43.388793    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:43.454794    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:43.565200    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:43.708545    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:43.896141    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:43.963621    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:44.072380    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:44.199421    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:44.402083    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:44.494175    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:44.563079    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:44.705142    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:44.889512    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:44.957324    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:45.066624    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:45.207678    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:45.398654    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:45.460830    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:45.570395    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:45.694908    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:46.396892    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:46.399585    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:46.409831    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:46.410002    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:46.410369    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:46.595322    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:46.596037    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:47.289780    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:47.291430    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:47.295538    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:47.297856    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:47.302846    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:47.396539    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:47.462400    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:47.572184    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:47.711958    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:47.901428    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:47.965423    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:48.062532    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:48.211137    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:48.402764    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:48.455099    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:48.566761    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:48.694731    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:48.899995    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:48.967192    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:49.075063    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:49.204610    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:49.390091    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:49.456827    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:49.567819    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:49.695022    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:49.899140    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:49.962714    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:50.074701    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:50.199450    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:50.403173    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:50.453243    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:50.561674    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:50.703189    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:50.895211    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:50.958731    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:51.480646    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:51.484212    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:51.490851    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:51.491026    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:51.569259    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:51.696688    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:51.897201    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:51.961865    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:52.070807    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:52.195071    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:52.404089    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:52.466802    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:52.575083    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:52.700495    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:52.900336    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:52.965891    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:53.062637    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:53.206232    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:53.392423    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:53.462431    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:53.568295    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:53.708619    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:53.896197    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:53.961277    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:54.069949    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:54.194940    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:54.399155    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:54.465270    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:54.575037    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:54.699658    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:54.902367    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:54.967725    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:55.074894    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:55.201486    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:55.403132    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:55.467588    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:55.563217    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:55.703599    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:55.893818    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:55.957206    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:56.070454    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:56.208970    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:56.402935    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:56.469486    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:56.563585    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:56.703389    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:56.891561    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:56.958833    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:57.070203    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:57.196745    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:57.403391    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:57.465727    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:57.564558    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:57.704946    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:57.896961    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:57.960637    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:58.070343    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:58.209664    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:58.399819    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:58.467056    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:58.575615    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:58.702635    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:58.892771    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:58.958625    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:59.069414    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:59.203518    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:59.400061    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:59.464055    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:58:59.573535    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:58:59.699350    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:58:59.895191    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:59.954552    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:00.065540    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:59:00.203330    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:00.394084    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:00.459519    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:00.568207    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:59:00.694877    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:00.901248    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:00.968702    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:01.075535    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:59:01.199976    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:01.388824    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:01.455239    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:01.565924    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:59:01.706252    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:01.895476    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:01.966283    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:02.070697    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:59:02.195857    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:02.400237    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:02.468657    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:02.575419    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:59:02.699850    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:03.326055    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:03.326753    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:03.330194    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:59:03.331931    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:03.404401    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:03.472600    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:03.574779    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:59:03.698424    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:03.899784    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:03.965561    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:04.071545    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:59:04.198699    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:04.399047    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:04.644165    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:04.645572    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:59:04.698896    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:04.901574    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:04.985631    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:05.076933    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:59:05.200303    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:05.399870    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:05.464154    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:05.571392    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:59:05.745776    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:05.914331    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:05.968142    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:06.063440    5968 kapi.go:107] duration metric: took 1m25.5096801s to wait for kubernetes.io/minikube-addons=registry ...
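	(Editor's note, not part of the captured log: the line above marks the point where the registry label-selector wait finished after ~1m25s; the surrounding kapi.go:96 lines are the per-poll output of that wait. The sketch below is a minimal, illustrative reconstruction of such a label-selector polling loop using standard client-go; it is an assumption-based example, not minikube's actual kapi implementation, and the kubeconfig path is hypothetical.)

	```go
	// Minimal sketch of a "wait for pods matching a label selector" loop,
	// assuming standard client-go. Illustrative only; NOT minikube's code.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path, for illustration only.
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		selector := "kubernetes.io/minikube-addons=registry"
		// Poll every 500ms until every matching pod reports Running, or time out.
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // not ready yet; keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
		if err != nil {
			panic(err)
		}
		fmt.Println("all pods matching", selector, "are Running")
	}
	```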
	I0108 22:59:06.204280    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:06.389553    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:06.457577    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:06.705421    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:06.894218    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:06.961058    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:07.197881    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:07.402032    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:07.466570    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:07.706521    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:07.892545    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:07.956798    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:08.194198    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:08.398937    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:08.463404    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:08.699788    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:08.902433    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:08.953509    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:09.201809    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:09.391379    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:09.457769    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:09.709499    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:09.899548    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:09.966449    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:10.199760    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:10.402829    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:10.466803    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:10.701716    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:10.889098    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:10.956771    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:11.207873    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:11.395172    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:11.459191    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:11.697122    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:11.887862    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:11.957016    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:12.207976    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:12.399415    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:12.467160    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:12.702564    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:12.986756    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:12.986756    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:13.195928    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:13.650836    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:13.664822    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:13.700087    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:13.900172    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:13.962991    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:14.198564    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:14.424747    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:14.880013    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:14.882873    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:14.889170    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:14.983152    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:15.200158    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:15.404419    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:15.472940    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:15.707897    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:15.892896    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:15.957358    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:16.209671    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:16.435657    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:16.465368    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:16.702186    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:16.889458    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:16.952358    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:17.205720    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:17.394167    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:17.459809    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:17.812646    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:17.899117    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:17.966362    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:18.204740    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:18.394962    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:18.458976    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:18.708270    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:18.896987    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:18.963667    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:19.200761    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:19.403433    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:19.456460    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:19.708859    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:19.896733    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:19.963315    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:20.198986    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:20.389805    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:20.456768    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:20.704862    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:20.893918    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:20.960710    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:21.290476    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:21.398926    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:21.638945    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:21.975216    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:21.976111    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:21.980853    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:22.210687    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:22.395012    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:22.460799    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:22.709947    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:22.895420    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:22.961886    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:23.205244    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:23.395641    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:23.462245    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:23.698212    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:23.902340    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:23.966439    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:24.208607    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:24.397965    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:24.465139    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:24.701633    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:24.889449    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:24.954807    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:25.206972    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:25.396445    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:25.460964    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:25.695223    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:25.901999    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:25.976053    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:27.038997    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:27.044282    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:27.045031    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:27.230408    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:27.230951    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:27.234894    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:27.484828    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:27.485183    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:27.486514    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:27.928937    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:27.933886    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:27.955384    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:28.206552    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:28.396207    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:28.471526    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:28.699022    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:28.903377    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:28.956919    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:29.207506    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:29.394778    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:29.459462    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:29.696039    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:29.902228    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:29.968932    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:30.201885    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:30.391171    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:30.456188    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:30.706588    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:30.892929    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:30.958121    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:31.209352    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:31.396782    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:31.464215    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:31.699173    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:31.900041    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:31.964304    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:32.203134    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:32.391743    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:32.458536    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:32.699239    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:32.902643    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:32.952603    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:33.206303    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:33.398332    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:33.463572    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:34.186836    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:34.189465    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:34.196756    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:34.199392    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:34.390231    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:34.458926    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:34.706328    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:34.893722    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:34.955905    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:35.195475    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:35.403089    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:35.467534    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:35.702764    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:35.890608    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:35.955724    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:36.208365    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:36.397759    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:36.605274    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:36.699362    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:36.902309    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:36.978412    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:37.202001    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:37.390608    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:37.455263    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:37.715221    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:37.898548    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:37.961914    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:38.198430    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:38.404467    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:38.451446    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:38.708920    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:38.897172    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:38.965085    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:39.200617    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:39.519595    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:39.520712    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:39.707255    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:39.898673    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:39.966017    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:40.202683    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:40.390994    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:40.458540    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:40.712476    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:40.898061    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:40.963713    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:41.196734    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:41.404919    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:41.466923    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:41.703938    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:41.894294    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:41.962792    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:42.195599    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:42.402033    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:42.468522    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:42.702464    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:42.983332    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:42.986864    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:43.205508    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:43.396931    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:43.460576    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:43.699002    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:43.889380    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:43.956178    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:44.205605    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:44.406363    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:44.467135    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:44.694679    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:44.903005    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:44.967372    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:45.204109    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:45.393112    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:45.459206    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:45.707368    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:45.896745    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:45.967584    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:46.195669    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:46.400671    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:46.468769    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:46.729308    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:46.888992    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:46.954541    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:47.206445    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:47.394982    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:47.458220    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:47.708382    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:47.893812    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:47.959028    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:48.208089    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:48.391786    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:48.459468    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:48.705422    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:48.895136    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:48.957197    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:49.197447    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:49.401379    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:49.468718    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:49.702641    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:49.903395    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:49.957329    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:50.208534    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:50.399083    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:50.462869    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:50.701163    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:50.891216    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:50.953296    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:51.588404    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:51.592092    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:51.597860    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:51.705278    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:51.893347    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:51.957359    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:52.195641    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:52.404447    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:52.467912    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:52.703582    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:52.892316    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:52.964200    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:53.194338    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:53.402765    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:53.453265    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:53.709994    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:53.898072    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:53.963427    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:54.199753    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:54.388271    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:54.455212    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:54.707830    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:54.897285    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:54.960425    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:55.202415    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:55.404500    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:55.455122    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:55.708648    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:55.920071    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:55.963282    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:56.250581    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:56.396830    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:56.463676    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:56.698935    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:56.904190    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:56.967679    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:57.203730    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:57.390587    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:57.454994    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:57.705748    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:57.894551    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:57.959464    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:58.195916    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:58.399643    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:58.466902    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:58.703191    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:58.890662    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:58.963292    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:59.218272    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:59.686350    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:59.689516    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:59:59.695297    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:59:59.896644    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:59:59.964365    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:00.198366    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:00.400659    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:00.465934    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:00.700641    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:00.903840    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:00.953761    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:01.206948    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:01.398238    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:01.463460    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:01.699504    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:01.968163    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:01.973081    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:02.203622    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:02.391579    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:02.456465    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:02.705947    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:02.894023    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:02.957947    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:03.195633    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:03.399292    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:03.465576    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:03.700789    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:03.889420    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:03.959160    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:04.194855    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:04.399282    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:04.463679    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:04.699753    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:04.889100    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:04.954991    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:05.205761    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:05.390888    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:05.733451    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:05.738775    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:05.889959    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:05.955048    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:06.201520    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:06.391198    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:06.459768    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:06.700052    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:06.901944    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:06.967005    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:07.202442    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:07.389291    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:07.459004    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:07.706345    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:07.898307    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:07.964001    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:08.201918    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:08.392655    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:08.456571    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:08.709934    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:08.904723    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:08.959923    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:09.216921    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:09.786992    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:09.786992    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:09.791006    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:09.899884    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:09.957167    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:10.215438    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:10.395283    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:10.461996    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:10.699954    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:10.902463    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:10.977452    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:11.202578    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:11.392238    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:11.463027    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:11.711795    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:11.896352    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:11.962660    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 23:00:12.200326    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:12.401895    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:12.456886    5968 kapi.go:107] duration metric: took 2m21.5134514s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0108 23:00:12.704968    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:12.895091    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:13.204110    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:13.398851    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:13.698306    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:13.906570    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:14.200432    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:14.393926    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:14.694366    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:14.900769    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:15.203892    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:15.391506    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:15.707712    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:15.939109    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:16.242739    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:16.388496    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:16.705359    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:16.893929    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:17.196724    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:17.404719    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:17.709047    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:17.899556    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:18.201785    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:18.388858    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:18.705559    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:18.893629    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:19.198264    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:19.405937    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:19.699841    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:19.892976    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:20.216893    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:20.405244    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:20.824063    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:20.975667    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:21.199003    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:21.401319    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:21.702119    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:21.903883    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:22.207812    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:22.398842    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:22.700176    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:22.888863    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:23.209070    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:23.396639    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:23.701130    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:23.902655    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:24.534708    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:24.535479    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:24.790044    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:25.339879    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:25.340234    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:25.998014    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:25.998187    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:26.005639    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:26.206467    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:26.398102    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:26.710579    5968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 23:00:26.911321    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:27.208245    5968 kapi.go:107] duration metric: took 2m38.0207222s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0108 23:00:27.397244    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:27.906140    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:28.398136    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:28.899427    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:29.415245    5968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 23:00:29.905330    5968 kapi.go:107] duration metric: took 2m34.0226857s to wait for kubernetes.io/minikube-addons=gcp-auth ...
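The kapi.go:96 lines above are minikube's addon-readiness poll: it repeatedly lists the pods matching each label selector, logs the current phase while they are still Pending, and records the total wait as a duration metric (kapi.go:107) once they are Running. The Go sketch below reproduces that polling pattern with client-go; it is a minimal illustration under assumed names and timings (waitForPods, a 500 ms poll interval, kubeconfig at the default path), not minikube's actual kapi.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods lists pods matching selector in ns and keeps polling until
// every matching pod reports phase Running or the timeout expires,
// printing the current phase on each pass (compare the kapi.go:96 lines).
func waitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				ready = false
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		if ready {
			fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for pods matching %q", selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPods(context.Background(), client, "kube-system",
		"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
		panic(err)
	}
}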
	I0108 23:00:29.908238    5968 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-852800 cluster.
	I0108 23:00:29.911139    5968 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0108 23:00:29.915073    5968 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
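The gcp-auth messages above describe a pod-level opt-out: the webhook skips any pod carrying the gcp-auth-skip-secret label. The client-go snippet below shows one way to express such a pod spec; only the label key comes from the message above, while the label value "true", the pod and container names, and the image are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithoutGCPCreds builds a pod spec labelled so the gcp-auth webhook
// leaves it alone. The label value and all names here are assumptions
// made for the example; the log above only specifies the label key.
func podWithoutGCPCreds() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "nginx",
			}},
		},
	}
}

func main() {
	p := podWithoutGCPCreds()
	fmt.Println(p.Name, p.Labels)
}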
	I0108 23:00:29.918875    5968 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner, metrics-server, helm-tiller, cloud-spanner, yakd, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0108 23:00:29.923150    5968 addons.go:508] enable addons completed in 3m16.4068313s: enabled=[nvidia-device-plugin ingress-dns storage-provisioner metrics-server helm-tiller cloud-spanner yakd inspektor-gadget storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0108 23:00:29.923235    5968 start.go:233] waiting for cluster config update ...
	I0108 23:00:29.923345    5968 start.go:242] writing updated cluster config ...
	I0108 23:00:29.940045    5968 ssh_runner.go:195] Run: rm -f paused
	I0108 23:00:30.219605    5968 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 23:00:30.228452    5968 out.go:177] * Done! kubectl is now configured to use "addons-852800" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-01-08 22:55:02 UTC, ends at Mon 2024-01-08 23:01:26 UTC. --
	Jan 08 23:01:12 addons-852800 dockerd[1343]: time="2024-01-08T23:01:12.462633461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 23:01:12 addons-852800 dockerd[1343]: time="2024-01-08T23:01:12.462670061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 23:01:12 addons-852800 dockerd[1343]: time="2024-01-08T23:01:12.462688461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 23:01:13 addons-852800 dockerd[1337]: time="2024-01-08T23:01:13.184213740Z" level=info msg="ignoring event" container=5836f7b87143e3625fe964ab0ef998a768faad536194beccf432c506fe531210 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 23:01:13 addons-852800 dockerd[1343]: time="2024-01-08T23:01:13.194316732Z" level=info msg="shim disconnected" id=5836f7b87143e3625fe964ab0ef998a768faad536194beccf432c506fe531210 namespace=moby
	Jan 08 23:01:13 addons-852800 dockerd[1343]: time="2024-01-08T23:01:13.194914531Z" level=warning msg="cleaning up after shim disconnected" id=5836f7b87143e3625fe964ab0ef998a768faad536194beccf432c506fe531210 namespace=moby
	Jan 08 23:01:13 addons-852800 dockerd[1343]: time="2024-01-08T23:01:13.195131831Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 23:01:13 addons-852800 dockerd[1337]: time="2024-01-08T23:01:13.457000715Z" level=info msg="ignoring event" container=54f7e22a8abd3f894f6e362262a52808bda070981e4327a42b2113e4894bd42d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 23:01:13 addons-852800 dockerd[1343]: time="2024-01-08T23:01:13.460442012Z" level=info msg="shim disconnected" id=54f7e22a8abd3f894f6e362262a52808bda070981e4327a42b2113e4894bd42d namespace=moby
	Jan 08 23:01:13 addons-852800 dockerd[1343]: time="2024-01-08T23:01:13.460763112Z" level=warning msg="cleaning up after shim disconnected" id=54f7e22a8abd3f894f6e362262a52808bda070981e4327a42b2113e4894bd42d namespace=moby
	Jan 08 23:01:13 addons-852800 dockerd[1343]: time="2024-01-08T23:01:13.460977712Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 08 23:01:23 addons-852800 dockerd[1343]: time="2024-01-08T23:01:23.262759381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 23:01:23 addons-852800 dockerd[1343]: time="2024-01-08T23:01:23.262844181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 23:01:23 addons-852800 dockerd[1343]: time="2024-01-08T23:01:23.262889981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 23:01:23 addons-852800 dockerd[1343]: time="2024-01-08T23:01:23.263214981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 23:01:23 addons-852800 cri-dockerd[1227]: time="2024-01-08T23:01:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/249f59157638efa23ee781f0c1786db10a86436d5c37d1df4c9ce164afd044e7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jan 08 23:01:24 addons-852800 cri-dockerd[1227]: time="2024-01-08T23:01:24Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Jan 08 23:01:24 addons-852800 dockerd[1343]: time="2024-01-08T23:01:24.763013135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 23:01:24 addons-852800 dockerd[1343]: time="2024-01-08T23:01:24.763451635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 23:01:24 addons-852800 dockerd[1343]: time="2024-01-08T23:01:24.763500735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 23:01:24 addons-852800 dockerd[1343]: time="2024-01-08T23:01:24.763534235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 23:01:26 addons-852800 dockerd[1337]: time="2024-01-08T23:01:26.121368916Z" level=info msg="ignoring event" container=4cffb5f648e59602c478182273be3b84c24935eda84634d4e3779446eb245f24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 23:01:26 addons-852800 dockerd[1343]: time="2024-01-08T23:01:26.127843514Z" level=info msg="shim disconnected" id=4cffb5f648e59602c478182273be3b84c24935eda84634d4e3779446eb245f24 namespace=moby
	Jan 08 23:01:26 addons-852800 dockerd[1343]: time="2024-01-08T23:01:26.127940914Z" level=warning msg="cleaning up after shim disconnected" id=4cffb5f648e59602c478182273be3b84c24935eda84634d4e3779446eb245f24 namespace=moby
	Jan 08 23:01:26 addons-852800 dockerd[1343]: time="2024-01-08T23:01:26.127958414Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	7f1df47f1a275       nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026                                                                2 seconds ago        Running             task-pv-container                        0                   249f59157638e       task-pv-pod-restore
	bc4293afca872       nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59                                                                15 seconds ago       Running             nginx                                    0                   66956e83eaf87       nginx
	eccc2c07833aa       alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f                                                          20 seconds ago       Exited              helm-test                                0                   06a7f981a1d6e       helm-test
	9afa8a999022b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                                 58 seconds ago       Running             gcp-auth                                 0                   1d4f9b9d81914       gcp-auth-d4c87556c-d5p4b
	bc060170a2d2e       registry.k8s.io/ingress-nginx/controller@sha256:b3aba22b1da80e7acfc52b115cae1d4c687172cbf2b742d5b502419c25ff340e                             About a minute ago   Running             controller                               0                   badb411897e71       ingress-nginx-controller-69cff4fd79-qdfpm
	87aa9d243e2a4       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   26d0c1bd12edb       csi-hostpathplugin-8xkj7
	2d0fa0ad4547a       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   26d0c1bd12edb       csi-hostpathplugin-8xkj7
	543736ebd1024       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   26d0c1bd12edb       csi-hostpathplugin-8xkj7
	55ea49c1d13ca       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   26d0c1bd12edb       csi-hostpathplugin-8xkj7
	7e90bb8fb9295       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                About a minute ago   Running             node-driver-registrar                    0                   26d0c1bd12edb       csi-hostpathplugin-8xkj7
	2ff39293cb73a       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   About a minute ago   Running             csi-external-health-monitor-controller   0                   26d0c1bd12edb       csi-hostpathplugin-8xkj7
	cab1023af3607       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              About a minute ago   Running             csi-resizer                              0                   4f647cffa3085       csi-hostpath-resizer-0
	5824c2178a5ce       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80                   About a minute ago   Exited              patch                                    0                   6eada6a6bf089       ingress-nginx-admission-patch-zm9bs
	14551fbe4e99d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80                   About a minute ago   Exited              create                                   0                   63638251b3eee       ingress-nginx-admission-create-csxrb
	6577705ddad5f       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   258c655ac9a78       snapshot-controller-58dbcc7b99-sjg5r
	914188ff00355       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   36adfaae2acc5       snapshot-controller-58dbcc7b99-889hm
	8fd05e3b819b1       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       About a minute ago   Running             local-path-provisioner                   0                   5164c3b3017c0       local-path-provisioner-78b46b4d5c-994hb
	feccce69dc6ef       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             About a minute ago   Running             csi-attacher                             0                   a30309218333b       csi-hostpath-attacher-0
	85ae79d0dc295       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        2 minutes ago        Running             yakd                                     0                   323b81376b8b0       yakd-dashboard-9947fc6bf-9dwhs
	4cffb5f648e59       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  2 minutes ago        Exited              tiller                                   0                   77f7149cd4bfe       tiller-deploy-7b677967b9-zj9g8
	1f2c44947cc5b       gcr.io/cloud-spanner-emulator/emulator@sha256:5d905e581977bd3d543742e74ddb75c0ba65517cf19742089ae1be45b7b8aa49                               2 minutes ago        Running             cloud-spanner-emulator                   0                   559175bc75d62       cloud-spanner-emulator-64c8c85f65-4vgxj
	6a97e22338776       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             2 minutes ago        Running             minikube-ingress-dns                     0                   fb1212508eefb       kube-ingress-dns-minikube
	85e4d66800645       nvcr.io/nvidia/k8s-device-plugin@sha256:339be23400f58c04f09b6ba1d4d2e0e7120648f2b114880513685b22093311f1                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   89577d01e5a76       nvidia-device-plugin-daemonset-f9bgn
	a93ee4631f194       6e38f40d628db                                                                                                                                3 minutes ago        Running             storage-provisioner                      0                   42201a70952d1       storage-provisioner
	970ba42f557e4       ead0a4a53df89                                                                                                                                4 minutes ago        Running             coredns                                  0                   ee2b1077708ba       coredns-5dd5756b68-vjs6l
	04b11bf1befc5       83f6cc407eed8                                                                                                                                4 minutes ago        Running             kube-proxy                               0                   2d065be395412       kube-proxy-k5rgc
	895edf79ccf55       73deb9a3f7025                                                                                                                                4 minutes ago        Running             etcd                                     0                   e21ec113acb09       etcd-addons-852800
	c86923f30b12f       d058aa5ab969c                                                                                                                                4 minutes ago        Running             kube-controller-manager                  0                   c302d4097ce92       kube-controller-manager-addons-852800
	b7eaecba37c0c       7fe0e6f37db33                                                                                                                                4 minutes ago        Running             kube-apiserver                           0                   a2da2618d3052       kube-apiserver-addons-852800
	01be9c881020f       e3db313c6dbc0                                                                                                                                4 minutes ago        Running             kube-scheduler                           0                   d569d7032bc67       kube-scheduler-addons-852800
	
	
	==> controller_ingress [bc060170a2d2] <==
	I0108 23:00:26.739492       7 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"0567469e-293d-4fbe-b06c-5c641b492efc", APIVersion:"v1", ResourceVersion:"715", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0108 23:00:27.895441       7 nginx.go:303] "Starting NGINX process"
	I0108 23:00:27.895749       7 leaderelection.go:245] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0108 23:00:27.896166       7 nginx.go:323] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0108 23:00:27.896772       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0108 23:00:27.921475       7 leaderelection.go:255] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0108 23:00:27.923194       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-69cff4fd79-qdfpm"
	I0108 23:00:27.934522       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-69cff4fd79-qdfpm" node="addons-852800"
	I0108 23:00:28.034217       7 controller.go:210] "Backend successfully reloaded"
	I0108 23:00:28.034291       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I0108 23:00:28.034700       7 event.go:298] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-69cff4fd79-qdfpm", UID:"6104ed21-6ec0-4d67-a0ec-f42584e886a5", APIVersion:"v1", ResourceVersion:"742", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0108 23:00:59.054572       7 controller.go:1108] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0108 23:00:59.179746       7 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.125s renderingIngressLength:1 renderingIngressTime:0s admissionTime:18.0kBs testedConfigurationSize:0.125}
	I0108 23:00:59.179894       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0108 23:00:59.703527       7 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0108 23:00:59.706486       7 event.go:298] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"863de593-0550-4c03-acb1-3d753d90ac15", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1438", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0108 23:00:59.714134       7 controller.go:1108] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0108 23:00:59.714340       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0108 23:00:59.837511       7 controller.go:210] "Backend successfully reloaded"
	I0108 23:00:59.837974       7 event.go:298] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-69cff4fd79-qdfpm", UID:"6104ed21-6ec0-4d67-a0ec-f42584e886a5", APIVersion:"v1", ResourceVersion:"742", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0108 23:01:03.048837       7 controller.go:1214] Service "default/nginx" does not have any active Endpoint.
	I0108 23:01:03.048953       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0108 23:01:03.352990       7 controller.go:210] "Backend successfully reloaded"
	I0108 23:01:03.354106       7 event.go:298] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-69cff4fd79-qdfpm", UID:"6104ed21-6ec0-4d67-a0ec-f42584e886a5", APIVersion:"v1", ResourceVersion:"742", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0108 23:01:06.381551       7 controller.go:1214] Service "default/nginx" does not have any active Endpoint.
	
	
	==> coredns [970ba42f557e] <==
	[INFO] plugin/reload: Running configuration SHA512 = 8a94475fd8f6b5be74d16a1164f3817e7e3c9c869aad283bf9dc9abd5dea1e10b4b9491d20650a72f422eaef0ab2bbcc33a356e2ff9bbbd28022709e05d1c5d7
	[INFO] Reloading complete
	[INFO] 127.0.0.1:36195 - 40961 "HINFO IN 5721897267335087392.2030088144852450884. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.055985973s
	[INFO] 10.244.0.9:35902 - 10077 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000283602s
	[INFO] 10.244.0.9:35902 - 61783 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000311602s
	[INFO] 10.244.0.9:33543 - 57618 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000215201s
	[INFO] 10.244.0.9:33543 - 33297 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000239301s
	[INFO] 10.244.0.9:55737 - 2791 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000192302s
	[INFO] 10.244.0.9:55737 - 21498 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000149201s
	[INFO] 10.244.0.9:41491 - 35219 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000210202s
	[INFO] 10.244.0.9:41491 - 39837 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000119701s
	[INFO] 10.244.0.9:57345 - 13391 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000134601s
	[INFO] 10.244.0.9:34960 - 63432 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000130401s
	[INFO] 10.244.0.9:48664 - 18587 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000185301s
	[INFO] 10.244.0.9:44208 - 1030 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0000869s
	[INFO] 10.244.0.22:52003 - 31060 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000502698s
	[INFO] 10.244.0.22:60142 - 26519 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000115499s
	[INFO] 10.244.0.22:37122 - 18791 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000196699s
	[INFO] 10.244.0.22:36541 - 8760 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001096s
	[INFO] 10.244.0.22:56004 - 15071 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000175599s
	[INFO] 10.244.0.22:53493 - 53368 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0001095s
	[INFO] 10.244.0.22:60947 - 23002 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.002117792s
	[INFO] 10.244.0.22:57338 - 54730 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 230 0.001502294s
	[INFO] 10.244.0.23:55770 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000320999s
	[INFO] 10.244.0.23:38777 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000112s
	
	
	==> describe nodes <==
	Name:               addons-852800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-852800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=addons-852800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T22_57_01_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-852800
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-852800"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 22:56:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-852800
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 23:01:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 23:01:10 +0000   Mon, 08 Jan 2024 22:56:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 23:01:10 +0000   Mon, 08 Jan 2024 22:56:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 23:01:10 +0000   Mon, 08 Jan 2024 22:56:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 23:01:10 +0000   Mon, 08 Jan 2024 22:57:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.24.111.87
	  Hostname:    addons-852800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914588Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1730bfc728e4c77bb2ef26f13993f42
	  System UUID:                d9aeea22-3ab9-614c-b6a1-6e49ecd94276
	  Boot ID:                    d10feca5-9ca0-44af-b5e6-63dacb86e718
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-64c8c85f65-4vgxj                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     task-pv-pod-restore                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  gcp-auth                    gcp-auth-d4c87556c-d5p4b                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  ingress-nginx               ingress-nginx-controller-69cff4fd79-qdfpm                     100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m38s
	  kube-system                 coredns-5dd5756b68-vjs6l                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m13s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 csi-hostpathplugin-8xkj7                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 etcd-addons-852800                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m29s
	  kube-system                 kube-apiserver-addons-852800                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-controller-manager-addons-852800                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-proxy-k5rgc                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-scheduler-addons-852800                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 nvidia-device-plugin-daemonset-f9bgn                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 snapshot-controller-58dbcc7b99-889hm                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 snapshot-controller-58dbcc7b99-sjg5r                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  local-path-storage          helper-pod-create-pvc-026a5e48-d0e7-4e79-b0a6-d014883a4060    0 (0%)        0 (0%)      0 (0%)           0 (0%)         0s
	  local-path-storage          local-path-provisioner-78b46b4d5c-994hb                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-9dwhs                                0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m                     kube-proxy       
	  Normal  Starting                 4m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m36s (x8 over 4m36s)  kubelet          Node addons-852800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m36s (x8 over 4m36s)  kubelet          Node addons-852800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m36s (x7 over 4m36s)  kubelet          Node addons-852800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m25s                  kubelet          Node addons-852800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m25s                  kubelet          Node addons-852800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m25s                  kubelet          Node addons-852800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m23s                  kubelet          Node addons-852800 status is now: NodeReady
	  Normal  RegisteredNode           4m13s                  node-controller  Node addons-852800 event: Registered Node addons-852800 in Controller
	
	
	==> dmesg <==
	[  +0.165887] systemd-fstab-generator[1001]: Ignoring "noauto" for root device
	[  +0.195824] systemd-fstab-generator[1014]: Ignoring "noauto" for root device
	[  +1.385011] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.374180] systemd-fstab-generator[1172]: Ignoring "noauto" for root device
	[  +0.179305] systemd-fstab-generator[1183]: Ignoring "noauto" for root device
	[  +0.160831] systemd-fstab-generator[1194]: Ignoring "noauto" for root device
	[  +0.175591] systemd-fstab-generator[1205]: Ignoring "noauto" for root device
	[  +0.209015] systemd-fstab-generator[1219]: Ignoring "noauto" for root device
	[ +10.111028] systemd-fstab-generator[1328]: Ignoring "noauto" for root device
	[  +5.667031] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.275394] systemd-fstab-generator[1693]: Ignoring "noauto" for root device
	[  +0.569397] kauditd_printk_skb: 29 callbacks suppressed
	[Jan 8 22:57] systemd-fstab-generator[2688]: Ignoring "noauto" for root device
	[ +31.390762] kauditd_printk_skb: 24 callbacks suppressed
	[ +10.895667] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.068362] kauditd_printk_skb: 1 callbacks suppressed
	[  +6.438706] kauditd_printk_skb: 38 callbacks suppressed
	[Jan 8 22:58] kauditd_printk_skb: 44 callbacks suppressed
	[Jan 8 22:59] hrtimer: interrupt took 4657124 ns
	[ +28.559021] kauditd_printk_skb: 20 callbacks suppressed
	[Jan 8 23:00] kauditd_printk_skb: 26 callbacks suppressed
	[ +25.640809] kauditd_printk_skb: 8 callbacks suppressed
	[ +15.220994] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.806636] kauditd_printk_skb: 3 callbacks suppressed
	[Jan 8 23:01] kauditd_printk_skb: 20 callbacks suppressed
	
	
	==> etcd [895edf79ccf5] <==
	{"level":"warn","ts":"2024-01-08T23:00:59.700258Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T23:00:59.182287Z","time spent":"517.942887ms","remote":"127.0.0.1:51352","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":432,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/ingress/default/nginx-ingress\" mod_revision:0 > success:<request_put:<key:\"/registry/ingress/default/nginx-ingress\" value_size:385 >> failure:<>"}
	{"level":"warn","ts":"2024-01-08T23:00:59.701906Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.790088ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6088"}
	{"level":"info","ts":"2024-01-08T23:00:59.701936Z","caller":"traceutil/trace.go:171","msg":"trace[1364781975] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1438; }","duration":"262.844188ms","start":"2024-01-08T23:00:59.439084Z","end":"2024-01-08T23:00:59.701928Z","steps":["trace[1364781975] 'agreement among raft nodes before linearized reading'  (duration: 262.707788ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T23:00:59.70218Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"514.222993ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:421"}
	{"level":"info","ts":"2024-01-08T23:00:59.702208Z","caller":"traceutil/trace.go:171","msg":"trace[970410748] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:1438; }","duration":"514.254293ms","start":"2024-01-08T23:00:59.187946Z","end":"2024-01-08T23:00:59.702201Z","steps":["trace[970410748] 'agreement among raft nodes before linearized reading'  (duration: 512.958795ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T23:00:59.702229Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T23:00:59.18789Z","time spent":"514.333093ms","remote":"127.0.0.1:51324","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":1,"response size":445,"request content":"key:\"/registry/services/endpoints/default/kubernetes\" "}
	{"level":"info","ts":"2024-01-08T23:00:59.947386Z","caller":"traceutil/trace.go:171","msg":"trace[416230706] linearizableReadLoop","detail":"{readStateIndex:1514; appliedIndex:1513; }","duration":"155.938455ms","start":"2024-01-08T23:00:59.791428Z","end":"2024-01-08T23:00:59.947366Z","steps":["trace[416230706] 'read index received'  (duration: 95.037151ms)","trace[416230706] 'applied index is now lower than readState.Index'  (duration: 60.900304ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T23:00:59.947481Z","caller":"traceutil/trace.go:171","msg":"trace[15715270] transaction","detail":"{read_only:false; response_revision:1439; number_of_response:1; }","duration":"234.475932ms","start":"2024-01-08T23:00:59.712996Z","end":"2024-01-08T23:00:59.947472Z","steps":["trace[15715270] 'process raft request'  (duration: 173.515728ms)","trace[15715270] 'compare'  (duration: 60.612305ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T23:00:59.947653Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.629634ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/ingress-nginx/ingress-nginx-controller-69cff4fd79-qdfpm.17a8817be21d6ac8\" ","response":"range_response_count:1 size:797"}
	{"level":"info","ts":"2024-01-08T23:00:59.947692Z","caller":"traceutil/trace.go:171","msg":"trace[1507435151] range","detail":"{range_begin:/registry/events/ingress-nginx/ingress-nginx-controller-69cff4fd79-qdfpm.17a8817be21d6ac8; range_end:; response_count:1; response_revision:1439; }","duration":"105.706534ms","start":"2024-01-08T23:00:59.841976Z","end":"2024-01-08T23:00:59.947683Z","steps":["trace[1507435151] 'agreement among raft nodes before linearized reading'  (duration: 105.556934ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T23:00:59.947739Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.329955ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-01-08T23:00:59.947766Z","caller":"traceutil/trace.go:171","msg":"trace[1130801689] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1439; }","duration":"156.360055ms","start":"2024-01-08T23:00:59.791398Z","end":"2024-01-08T23:00:59.947758Z","steps":["trace[1130801689] 'agreement among raft nodes before linearized reading'  (duration: 156.303655ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T23:01:03.386918Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.497777ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T23:01:03.387065Z","caller":"traceutil/trace.go:171","msg":"trace[34045190] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1455; }","duration":"166.666277ms","start":"2024-01-08T23:01:03.220387Z","end":"2024-01-08T23:01:03.387053Z","steps":["trace[34045190] 'range keys from in-memory index tree'  (duration: 166.245377ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T23:01:12.169283Z","caller":"traceutil/trace.go:171","msg":"trace[754959038] linearizableReadLoop","detail":"{readStateIndex:1590; appliedIndex:1589; }","duration":"301.732331ms","start":"2024-01-08T23:01:11.867534Z","end":"2024-01-08T23:01:12.169266Z","steps":["trace[754959038] 'read index received'  (duration: 261.778765ms)","trace[754959038] 'applied index is now lower than readState.Index'  (duration: 39.952666ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T23:01:12.169513Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"301.97563ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots/default/new-snapshot-demo\" ","response":"range_response_count:1 size:1602"}
	{"level":"info","ts":"2024-01-08T23:01:12.16955Z","caller":"traceutil/trace.go:171","msg":"trace[263299120] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots/default/new-snapshot-demo; range_end:; response_count:1; response_revision:1509; }","duration":"302.03083ms","start":"2024-01-08T23:01:11.867511Z","end":"2024-01-08T23:01:12.169542Z","steps":["trace[263299120] 'agreement among raft nodes before linearized reading'  (duration: 301.84823ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T23:01:12.169581Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T23:01:11.867499Z","time spent":"302.07243ms","remote":"127.0.0.1:51994","response type":"/etcdserverpb.KV/Range","request count":0,"request size":77,"response count":1,"response size":1626,"request content":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots/default/new-snapshot-demo\" "}
	{"level":"info","ts":"2024-01-08T23:01:12.169936Z","caller":"traceutil/trace.go:171","msg":"trace[298728926] transaction","detail":"{read_only:false; response_revision:1509; number_of_response:1; }","duration":"312.91492ms","start":"2024-01-08T23:01:11.856993Z","end":"2024-01-08T23:01:12.169908Z","steps":["trace[298728926] 'process raft request'  (duration: 272.399855ms)","trace[298728926] 'compare'  (duration: 39.577166ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T23:01:12.170064Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T23:01:11.856979Z","time spent":"313.00232ms","remote":"127.0.0.1:52002","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2312,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/snapshot.storage.k8s.io/volumesnapshotcontents/snapcontent-03f4cd06-9e9e-4c8e-b271-b4e2f9413502\" mod_revision:1506 > success:<request_put:<key:\"/registry/snapshot.storage.k8s.io/volumesnapshotcontents/snapcontent-03f4cd06-9e9e-4c8e-b271-b4e2f9413502\" value_size:2199 >> failure:<request_range:<key:\"/registry/snapshot.storage.k8s.io/volumesnapshotcontents/snapcontent-03f4cd06-9e9e-4c8e-b271-b4e2f9413502\" > >"}
	{"level":"info","ts":"2024-01-08T23:01:12.294682Z","caller":"traceutil/trace.go:171","msg":"trace[1844338772] transaction","detail":"{read_only:false; response_revision:1510; number_of_response:1; }","duration":"112.256202ms","start":"2024-01-08T23:01:12.1824Z","end":"2024-01-08T23:01:12.294656Z","steps":["trace[1844338772] 'process raft request'  (duration: 104.229209ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T23:01:12.304987Z","caller":"traceutil/trace.go:171","msg":"trace[927489586] linearizableReadLoop","detail":"{readStateIndex:1592; appliedIndex:1590; }","duration":"122.135493ms","start":"2024-01-08T23:01:12.18283Z","end":"2024-01-08T23:01:12.304966Z","steps":["trace[927489586] 'read index received'  (duration: 103.811109ms)","trace[927489586] 'applied index is now lower than readState.Index'  (duration: 18.323184ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T23:01:12.305712Z","caller":"traceutil/trace.go:171","msg":"trace[1311343380] transaction","detail":"{read_only:false; response_revision:1511; number_of_response:1; }","duration":"122.997093ms","start":"2024-01-08T23:01:12.182698Z","end":"2024-01-08T23:01:12.305695Z","steps":["trace[1311343380] 'process raft request'  (duration: 122.067593ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T23:01:12.307468Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.640391ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots/default/new-snapshot-demo\" ","response":"range_response_count:1 size:1602"}
	{"level":"info","ts":"2024-01-08T23:01:12.308792Z","caller":"traceutil/trace.go:171","msg":"trace[1078084529] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots/default/new-snapshot-demo; range_end:; response_count:1; response_revision:1511; }","duration":"125.97269ms","start":"2024-01-08T23:01:12.182799Z","end":"2024-01-08T23:01:12.308771Z","steps":["trace[1078084529] 'agreement among raft nodes before linearized reading'  (duration: 124.433991ms)"],"step_count":1}
	
	
	==> gcp-auth [9afa8a999022] <==
	2024/01/08 23:00:29 GCP Auth Webhook started!
	2024/01/08 23:00:40 Ready to marshal response ...
	2024/01/08 23:00:40 Ready to write response ...
	2024/01/08 23:00:46 Ready to marshal response ...
	2024/01/08 23:00:46 Ready to write response ...
	2024/01/08 23:00:58 Ready to marshal response ...
	2024/01/08 23:00:58 Ready to write response ...
	2024/01/08 23:01:00 Ready to marshal response ...
	2024/01/08 23:01:00 Ready to write response ...
	2024/01/08 23:01:22 Ready to marshal response ...
	2024/01/08 23:01:22 Ready to write response ...
	2024/01/08 23:01:26 Ready to marshal response ...
	2024/01/08 23:01:26 Ready to write response ...
	2024/01/08 23:01:26 Ready to marshal response ...
	2024/01/08 23:01:26 Ready to write response ...
	
	
	==> kernel <==
	 23:01:27 up 6 min,  0 users,  load average: 4.46, 3.28, 1.51
	Linux addons-852800 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [b7eaecba37c0] <==
	Trace[681325673]: [629.158106ms] [629.158106ms] END
	I0108 22:59:56.586941       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0108 23:00:25.995440       1 trace.go:236] Trace[366618642]: "Update" accept:application/json, */*,audit-id:5de6ce59-b4ef-4c4d-818e-265155f93b84,client:10.244.0.13,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/snapshot-controller-leader,user-agent:snapshot-controller/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (08-Jan-2024 23:00:25.349) (total time: 645ms):
	Trace[366618642]: ["GuaranteedUpdate etcd3" audit-id:5de6ce59-b4ef-4c4d-818e-265155f93b84,key:/leases/kube-system/snapshot-controller-leader,type:*coordination.Lease,resource:leases.coordination.k8s.io 645ms (23:00:25.350)
	Trace[366618642]:  ---"Txn call completed" 644ms (23:00:25.995)]
	Trace[366618642]: [645.639767ms] [645.639767ms] END
	I0108 23:00:25.997817       1 trace.go:236] Trace[1488296116]: "List" accept:application/json, */*,audit-id:8d1b4d8f-bf8e-46d9-8418-7e781ef9c546,client:172.24.96.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/gcp-auth/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (08-Jan-2024 23:00:25.390) (total time: 607ms):
	Trace[1488296116]: ["List(recursive=true) etcd3" audit-id:8d1b4d8f-bf8e-46d9-8418-7e781ef9c546,key:/pods/gcp-auth,resourceVersion:,resourceVersionMatch:,limit:0,continue: 607ms (23:00:25.390)]
	Trace[1488296116]: [607.244829ms] [607.244829ms] END
	I0108 23:00:52.873822       1 trace.go:236] Trace[980007749]: "Delete" accept:application/json,audit-id:28a89861-8e7d-41f8-a189-48e3857280d6,client:127.0.0.1,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/kube-system/deployments/metrics-server,user-agent:kubectl/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:DELETE (08-Jan-2024 23:00:52.232) (total time: 641ms):
	Trace[980007749]: ---"Object deleted from database" 640ms (23:00:52.873)
	Trace[980007749]: [641.216013ms] [641.216013ms] END
	I0108 23:00:53.160068       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0108 23:00:53.186199       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0108 23:00:54.246589       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0108 23:00:59.181148       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0108 23:00:59.702823       1 trace.go:236] Trace[1196447032]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:f0e46916-8eb3-4c51-b3c9-3bea69f0e4b4,client:127.0.0.1,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:GET (08-Jan-2024 23:00:59.187) (total time: 515ms):
	Trace[1196447032]: ---"About to write a response" 515ms (23:00:59.702)
	Trace[1196447032]: [515.481391ms] [515.481391ms] END
	I0108 23:00:59.703784       1 trace.go:236] Trace[1783192397]: "Create" accept:application/json,audit-id:2f15f919-cde7-4f1b-bc9c-1143b11953b2,client:172.24.96.1,protocol:HTTP/2.0,resource:ingresses,scope:resource,url:/apis/networking.k8s.io/v1/namespaces/default/ingresses,user-agent:kubectl/v1.29.0 (windows/amd64) kubernetes/3f7a50f,verb:POST (08-Jan-2024 23:00:59.047) (total time: 656ms):
	Trace[1783192397]: ["Create etcd3" audit-id:2f15f919-cde7-4f1b-bc9c-1143b11953b2,key:/ingress/default/nginx-ingress,type:*networking.Ingress,resource:ingresses.networking.k8s.io 522ms (23:00:59.181)
	Trace[1783192397]:  ---"Txn call succeeded" 522ms (23:00:59.703)]
	Trace[1783192397]: [656.66107ms] [656.66107ms] END
	I0108 23:01:00.276911       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.235.183"}
	I0108 23:01:11.619617       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [c86923f30b12] <==
	I0108 23:00:45.265209       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="39.027402ms"
	I0108 23:00:45.265911       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="227.2µs"
	I0108 23:00:45.544173       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0108 23:00:52.955366       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="23µs"
	E0108 23:00:54.250796       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 23:00:55.265984       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 23:00:55.266101       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 23:00:57.343858       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 23:00:57.343914       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0108 23:01:03.257320       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	W0108 23:01:03.369264       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 23:01:03.369525       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0108 23:01:03.871805       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="12.1µs"
	W0108 23:01:10.265586       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 23:01:10.265676       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0108 23:01:13.449225       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0108 23:01:13.449263       1 shared_informer.go:318] Caches are synced for resource quota
	I0108 23:01:13.924254       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0108 23:01:13.924370       1 shared_informer.go:318] Caches are synced for garbage collector
	I0108 23:01:14.989267       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0108 23:01:21.170654       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0108 23:01:25.472914       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="7µs"
	I0108 23:01:26.140709       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0108 23:01:26.582000       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0108 23:01:26.582096       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	
	
	==> kube-proxy [04b11bf1befc] <==
	I0108 22:57:26.303396       1 server_others.go:69] "Using iptables proxy"
	I0108 22:57:26.488985       1 node.go:141] Successfully retrieved node IP: 172.24.111.87
	I0108 22:57:26.927598       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 22:57:26.927755       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 22:57:26.944986       1 server_others.go:152] "Using iptables Proxier"
	I0108 22:57:26.945510       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 22:57:26.945903       1 server.go:846] "Version info" version="v1.28.4"
	I0108 22:57:26.946173       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 22:57:26.957834       1 config.go:188] "Starting service config controller"
	I0108 22:57:26.957941       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 22:57:26.958153       1 config.go:97] "Starting endpoint slice config controller"
	I0108 22:57:26.958217       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 22:57:26.958963       1 config.go:315] "Starting node config controller"
	I0108 22:57:26.959229       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 22:57:27.059381       1 shared_informer.go:318] Caches are synced for node config
	I0108 22:57:27.059853       1 shared_informer.go:318] Caches are synced for service config
	I0108 22:57:27.068483       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [01be9c881020] <==
	W0108 22:56:57.925627       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 22:56:57.925693       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 22:56:58.091882       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 22:56:58.091917       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 22:56:58.119877       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 22:56:58.119963       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 22:56:58.184792       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 22:56:58.184958       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 22:56:58.234745       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 22:56:58.235058       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 22:56:58.245924       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 22:56:58.245970       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 22:56:58.276652       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 22:56:58.276879       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 22:56:58.280187       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 22:56:58.280345       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 22:56:58.304948       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 22:56:58.305051       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 22:56:58.321966       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 22:56:58.321997       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 22:56:58.334439       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 22:56:58.334469       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 22:56:58.352521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 22:56:58.352598       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0108 22:57:00.958101       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 22:55:02 UTC, ends at Mon 2024-01-08 23:01:27 UTC. --
	Jan 08 23:01:22 addons-852800 kubelet[2701]: E0108 23:01:22.659167    2701 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="11b6e0ae-7db1-42a9-849b-57bb9ee9a175" containerName="registry-proxy"
	Jan 08 23:01:22 addons-852800 kubelet[2701]: E0108 23:01:22.659177    2701 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="64becf8e-0c80-4b86-ad31-1fac01c460c7" containerName="registry"
	Jan 08 23:01:22 addons-852800 kubelet[2701]: E0108 23:01:22.659189    2701 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ca7c2c47-0e34-4971-a686-a3e06dde7376" containerName="helm-test"
	Jan 08 23:01:22 addons-852800 kubelet[2701]: I0108 23:01:22.659375    2701 memory_manager.go:346] "RemoveStaleState removing state" podUID="11b6e0ae-7db1-42a9-849b-57bb9ee9a175" containerName="registry-proxy"
	Jan 08 23:01:22 addons-852800 kubelet[2701]: I0108 23:01:22.659403    2701 memory_manager.go:346] "RemoveStaleState removing state" podUID="64becf8e-0c80-4b86-ad31-1fac01c460c7" containerName="registry"
	Jan 08 23:01:22 addons-852800 kubelet[2701]: I0108 23:01:22.659414    2701 memory_manager.go:346] "RemoveStaleState removing state" podUID="ca7c2c47-0e34-4971-a686-a3e06dde7376" containerName="helm-test"
	Jan 08 23:01:22 addons-852800 kubelet[2701]: I0108 23:01:22.659426    2701 memory_manager.go:346] "RemoveStaleState removing state" podUID="d2c57ef4-3f04-4f04-9fa9-bc0a5c9dadc3" containerName="task-pv-container"
	Jan 08 23:01:22 addons-852800 kubelet[2701]: I0108 23:01:22.790409    2701 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-385076d9-c303-4ad9-ba7b-83dce4de89af\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^d6c77613-ae79-11ee-b20f-c6c911bf2f99\") pod \"task-pv-pod-restore\" (UID: \"11d82d90-19cc-45bf-822c-4284032324f2\") " pod="default/task-pv-pod-restore"
	Jan 08 23:01:22 addons-852800 kubelet[2701]: I0108 23:01:22.790733    2701 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/11d82d90-19cc-45bf-822c-4284032324f2-gcp-creds\") pod \"task-pv-pod-restore\" (UID: \"11d82d90-19cc-45bf-822c-4284032324f2\") " pod="default/task-pv-pod-restore"
	Jan 08 23:01:22 addons-852800 kubelet[2701]: I0108 23:01:22.790841    2701 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slfdz\" (UniqueName: \"kubernetes.io/projected/11d82d90-19cc-45bf-822c-4284032324f2-kube-api-access-slfdz\") pod \"task-pv-pod-restore\" (UID: \"11d82d90-19cc-45bf-822c-4284032324f2\") " pod="default/task-pv-pod-restore"
	Jan 08 23:01:22 addons-852800 kubelet[2701]: I0108 23:01:22.905241    2701 operation_generator.go:665] "MountVolume.MountDevice succeeded for volume \"pvc-385076d9-c303-4ad9-ba7b-83dce4de89af\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^d6c77613-ae79-11ee-b20f-c6c911bf2f99\") pod \"task-pv-pod-restore\" (UID: \"11d82d90-19cc-45bf-822c-4284032324f2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/5f628d3f17a3743ea0c53cad3cc31fb40bc98e88ed82f21bd08c3766d400f0c1/globalmount\"" pod="default/task-pv-pod-restore"
	Jan 08 23:01:23 addons-852800 kubelet[2701]: I0108 23:01:23.921243    2701 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="249f59157638efa23ee781f0c1786db10a86436d5c37d1df4c9ce164afd044e7"
	Jan 08 23:01:25 addons-852800 kubelet[2701]: I0108 23:01:25.500073    2701 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=2.961361986 podCreationTimestamp="2024-01-08 23:01:22 +0000 UTC" firstStartedPulling="2024-01-08 23:01:24.010187194 +0000 UTC m=+263.253755179" lastFinishedPulling="2024-01-08 23:01:24.548755409 +0000 UTC m=+263.792323394" observedRunningTime="2024-01-08 23:01:25.000391953 +0000 UTC m=+264.243960038" watchObservedRunningTime="2024-01-08 23:01:25.499930201 +0000 UTC m=+264.743498286"
	Jan 08 23:01:26 addons-852800 kubelet[2701]: I0108 23:01:26.683099    2701 topology_manager.go:215] "Topology Admit Handler" podUID="2225c791-0e8d-49a3-88c6-084003b0187a" podNamespace="local-path-storage" podName="helper-pod-create-pvc-026a5e48-d0e7-4e79-b0a6-d014883a4060"
	Jan 08 23:01:26 addons-852800 kubelet[2701]: I0108 23:01:26.738125    2701 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/2225c791-0e8d-49a3-88c6-084003b0187a-script\") pod \"helper-pod-create-pvc-026a5e48-d0e7-4e79-b0a6-d014883a4060\" (UID: \"2225c791-0e8d-49a3-88c6-084003b0187a\") " pod="local-path-storage/helper-pod-create-pvc-026a5e48-d0e7-4e79-b0a6-d014883a4060"
	Jan 08 23:01:26 addons-852800 kubelet[2701]: I0108 23:01:26.738482    2701 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2945f\" (UniqueName: \"kubernetes.io/projected/2225c791-0e8d-49a3-88c6-084003b0187a-kube-api-access-2945f\") pod \"helper-pod-create-pvc-026a5e48-d0e7-4e79-b0a6-d014883a4060\" (UID: \"2225c791-0e8d-49a3-88c6-084003b0187a\") " pod="local-path-storage/helper-pod-create-pvc-026a5e48-d0e7-4e79-b0a6-d014883a4060"
	Jan 08 23:01:26 addons-852800 kubelet[2701]: I0108 23:01:26.738757    2701 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/2225c791-0e8d-49a3-88c6-084003b0187a-data\") pod \"helper-pod-create-pvc-026a5e48-d0e7-4e79-b0a6-d014883a4060\" (UID: \"2225c791-0e8d-49a3-88c6-084003b0187a\") " pod="local-path-storage/helper-pod-create-pvc-026a5e48-d0e7-4e79-b0a6-d014883a4060"
	Jan 08 23:01:26 addons-852800 kubelet[2701]: I0108 23:01:26.738902    2701 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2225c791-0e8d-49a3-88c6-084003b0187a-gcp-creds\") pod \"helper-pod-create-pvc-026a5e48-d0e7-4e79-b0a6-d014883a4060\" (UID: \"2225c791-0e8d-49a3-88c6-084003b0187a\") " pod="local-path-storage/helper-pod-create-pvc-026a5e48-d0e7-4e79-b0a6-d014883a4060"
	Jan 08 23:01:26 addons-852800 kubelet[2701]: I0108 23:01:26.840127    2701 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpj9s\" (UniqueName: \"kubernetes.io/projected/c6b2b083-2aed-4744-ba3e-316f467280b9-kube-api-access-xpj9s\") pod \"c6b2b083-2aed-4744-ba3e-316f467280b9\" (UID: \"c6b2b083-2aed-4744-ba3e-316f467280b9\") "
	Jan 08 23:01:26 addons-852800 kubelet[2701]: I0108 23:01:26.855189    2701 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6b2b083-2aed-4744-ba3e-316f467280b9-kube-api-access-xpj9s" (OuterVolumeSpecName: "kube-api-access-xpj9s") pod "c6b2b083-2aed-4744-ba3e-316f467280b9" (UID: "c6b2b083-2aed-4744-ba3e-316f467280b9"). InnerVolumeSpecName "kube-api-access-xpj9s". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 08 23:01:26 addons-852800 kubelet[2701]: I0108 23:01:26.942432    2701 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xpj9s\" (UniqueName: \"kubernetes.io/projected/c6b2b083-2aed-4744-ba3e-316f467280b9-kube-api-access-xpj9s\") on node \"addons-852800\" DevicePath \"\""
	Jan 08 23:01:27 addons-852800 kubelet[2701]: I0108 23:01:27.085675    2701 scope.go:117] "RemoveContainer" containerID="4cffb5f648e59602c478182273be3b84c24935eda84634d4e3779446eb245f24"
	Jan 08 23:01:27 addons-852800 kubelet[2701]: I0108 23:01:27.209570    2701 scope.go:117] "RemoveContainer" containerID="4cffb5f648e59602c478182273be3b84c24935eda84634d4e3779446eb245f24"
	Jan 08 23:01:27 addons-852800 kubelet[2701]: E0108 23:01:27.212113    2701 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 4cffb5f648e59602c478182273be3b84c24935eda84634d4e3779446eb245f24" containerID="4cffb5f648e59602c478182273be3b84c24935eda84634d4e3779446eb245f24"
	Jan 08 23:01:27 addons-852800 kubelet[2701]: I0108 23:01:27.212229    2701 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"4cffb5f648e59602c478182273be3b84c24935eda84634d4e3779446eb245f24"} err="failed to get container status \"4cffb5f648e59602c478182273be3b84c24935eda84634d4e3779446eb245f24\": rpc error: code = Unknown desc = Error response from daemon: No such container: 4cffb5f648e59602c478182273be3b84c24935eda84634d4e3779446eb245f24"
	
	
	==> storage-provisioner [a93ee4631f19] <==
	I0108 22:57:52.309681       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 22:57:52.600690       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 22:57:52.600747       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 22:57:52.631855       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 22:57:52.637180       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-852800_351b5154-457c-45e8-8b8b-667f89def651!
	I0108 22:57:52.677796       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6027063a-ac1c-4e40-bf60-685164be6767", APIVersion:"v1", ResourceVersion:"837", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-852800_351b5154-457c-45e8-8b8b-667f89def651 became leader
	I0108 22:57:53.039248       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-852800_351b5154-457c-45e8-8b8b-667f89def651!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 23:01:17.384161   14456 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
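
Note on the storage-provisioner lines above: the provisioner first attempts to acquire the leader lease kube-system/k8s.io-minikube-hostpath and only starts its controller once the lease is held. Below is a minimal, hedged sketch of that handshake using client-go's leaderelection package; the identity and timings are illustrative, and it uses a Lease lock for brevity even though the event in the log shows the real provisioner locking on an Endpoints object.

    // Sketch only: leader election the way the storage-provisioner log describes it.
    // Identity string and durations are assumptions, not the provisioner's values.
    package main

    import (
        "context"
        "log"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Lease named after the one in the log: kube-system/k8s.io-minikube-hostpath.
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
            Client:     client.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: "addons-852800_example-identity"},
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                // Mirrors "successfully acquired lease ... Starting provisioner controller".
                OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease; starting controller") },
                OnStoppedLeading: func() { log.Println("lost lease") },
            },
        })
    }
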
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-852800 -n addons-852800
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-852800 -n addons-852800: (13.9713744s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-852800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: test-local-path ingress-nginx-admission-create-csxrb ingress-nginx-admission-patch-zm9bs
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-852800 describe pod test-local-path ingress-nginx-admission-create-csxrb ingress-nginx-admission-patch-zm9bs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-852800 describe pod test-local-path ingress-nginx-admission-create-csxrb ingress-nginx-admission-patch-zm9bs: exit status 1 (317.1159ms)

                                                
                                                
-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-852800/172.24.111.87
	Start Time:       Mon, 08 Jan 2024 23:01:35 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  busybox:
	    Container ID:  docker://48aeb51f00da5963f7f66cdf927fc80d02613e63831de8da5fc3004ad59109fc
	    Image:         busybox:stable
	    Image ID:      docker-pullable://busybox@sha256:ba76950ac9eaa407512c9d859cea48114eeff8a6f12ebaa5d32ce79d4a017dd8
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 08 Jan 2024 23:01:39 +0000
	      Finished:     Mon, 08 Jan 2024 23:01:39 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jgzbf (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-jgzbf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  8s    default-scheduler  Successfully assigned default/test-local-path to addons-852800
	  Normal  Pulling    6s    kubelet            Pulling image "busybox:stable"
	  Normal  Pulled     5s    kubelet            Successfully pulled image "busybox:stable" in 1.715s (1.715s including waiting)
	  Normal  Created    5s    kubelet            Created container busybox
	  Normal  Started    4s    kubelet            Started container busybox

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-csxrb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-zm9bs" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-852800 describe pod test-local-path ingress-nginx-admission-create-csxrb ingress-nginx-admission-patch-zm9bs: exit status 1
--- FAIL: TestAddons/parallel/Registry (73.05s)
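
The stderr that fails this test (and several below) is the Docker CLI context warning. The long directory name in the missing path appears to be the hex SHA-256 of the context name "default", which the Docker context store uses for its metadata directory; a small sketch reproducing that path (the base directory is copied verbatim from the warning, and the hashing scheme is an assumption about the context store, not taken from minikube's code):

    // Sketch: derive the context-store metadata path shown in the warning.
    package main

    import (
        "crypto/sha256"
        "fmt"
        "path/filepath"
    )

    func main() {
        sum := sha256.Sum256([]byte("default"))
        dir := fmt.Sprintf("%x", sum) // expected: 37a8eec1ce19...a33f0688f
        meta := filepath.Join(`C:\Users\jenkins.minikube1\.docker\contexts\meta`, dir, "meta.json")
        fmt.Println(meta)
    }

If the printed path matches the one in the warning, the only thing missing on the Jenkins host is that context metadata file; the registry addon itself passed its health checks above.
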

                                                
                                    
TestErrorSpam/setup (194.53s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-827500 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 --driver=hyperv
E0108 23:05:30.308774   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0108 23:05:30.319027   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0108 23:05:30.338593   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0108 23:05:30.370091   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0108 23:05:30.418222   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0108 23:05:30.512917   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0108 23:05:30.674782   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0108 23:05:31.008257   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0108 23:05:31.663362   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0108 23:05:32.952047   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0108 23:05:35.527131   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0108 23:05:40.655612   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0108 23:05:50.902976   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0108 23:06:11.396348   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0108 23:06:52.357282   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0108 23:08:14.280810   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
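
The cert_rotation retries above are spaced roughly exponentially, from about 10 ms up to about 82 s, after the addons-852800 profile's client.crt was deleted with that profile. A tiny sketch of that kind of capped exponential backoff (illustrative only, not client-go's actual implementation):

    // Sketch of capped exponential backoff, matching the spacing of the
    // cert_rotation errors above; the cap value is an assumption.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 10 * time.Millisecond
        const maxDelay = 2 * time.Minute
        for i := 0; i < 14; i++ {
            fmt.Printf("attempt %2d after %v\n", i+1, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
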
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-827500 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 --driver=hyperv: (3m14.5305049s)
error_spam_test.go:96: unexpected stderr: "W0108 23:05:01.951345    9656 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-827500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
- KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
- MINIKUBE_LOCATION=17830
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting control plane node nospam-827500 in cluster nospam-827500
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-827500" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0108 23:05:01.951345    9656 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (194.53s)
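
TestErrorSpam/setup started the cluster successfully; the failure is purely the unexpected-stderr check, which the Docker CLI context warning trips. A hedged sketch of that style of allow-list screen follows; the allowed substrings are made up for illustration and are not minikube's real list.

    // Sketch: flag stderr lines that match none of an allow-list.
    package main

    import (
        "fmt"
        "strings"
    )

    func unexpectedStderr(stderr string, allowed []string) []string {
        var bad []string
        for _, line := range strings.Split(strings.TrimSpace(stderr), "\n") {
            line = strings.TrimSpace(line)
            if line == "" {
                continue
            }
            ok := false
            for _, a := range allowed {
                if strings.Contains(line, a) {
                    ok = true
                    break
                }
            }
            if !ok {
                bad = append(bad, line)
            }
        }
        return bad
    }

    func main() {
        stderr := `W0108 23:05:01.951345    9656 main.go:291] Unable to resolve the current Docker CLI context "default": ...`
        fmt.Println(unexpectedStderr(stderr, []string{"Unable to find image", "Downloading"}))
    }
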

                                                
                                    
TestFunctional/parallel/ConfigCmd (1.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-838800 config unset cpus" to be -""- but got *"W0108 23:20:43.266215    3440 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-838800 config get cpus: exit status 14 (299.2764ms)

                                                
                                                
** stderr ** 
	W0108 23:20:43.617112    8664 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-838800 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0108 23:20:43.617112    8664 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-838800 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0108 23:20:43.915457   10528 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-838800 config get cpus" to be -""- but got *"W0108 23:20:44.251324   15280 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-838800 config unset cpus" to be -""- but got *"W0108 23:20:44.558846    2024 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-838800 config get cpus: exit status 14 (282.9744ms)

                                                
                                                
** stderr ** 
	W0108 23:20:44.865897   11052 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-838800 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0108 23:20:44.865897   11052 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.90s)
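
Every step of ConfigCmd compares the command's stderr against an exact expected string, so the prepended Docker-context warning fails all five comparisons even though the underlying config get/set/unset behaviour is correct. An illustrative table-driven version of that comparison (the cases and strings are abbreviated assumptions, not the test's actual table):

    // Sketch: exact-match expected-vs-got stderr comparison, as in the failures above.
    package main

    import "fmt"

    type configCase struct {
        args     string
        expected string // expected stderr, exact match
        got      string
    }

    func main() {
        warning := `W0108 ... Unable to resolve the current Docker CLI context "default": ...`
        cases := []configCase{
            {"config unset cpus", "", warning},
            {"config get cpus", "Error: specified key could not be found in config",
                warning + "\nError: specified key could not be found in config"},
        }
        for _, c := range cases {
            if c.got != c.expected {
                fmt.Printf("FAIL %q: expected %q, got %q\n", c.args, c.expected, c.got)
            }
        }
    }
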

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-838800 service --namespace=default --https --url hello-node: exit status 1 (15.0569964s)

                                                
                                                
** stderr ** 
	W0108 23:21:32.575251    2436 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1510: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-838800 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.06s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-838800 service hello-node --url --format={{.IP}}: exit status 1 (15.0569173s)

                                                
                                                
** stderr ** 
	W0108 23:21:47.640475    8612 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1541: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-838800 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1547: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.06s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-838800 service hello-node --url: exit status 1 (15.0343089s)

                                                
                                                
** stderr ** 
	W0108 23:22:02.677907    7176 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1560: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-838800 service hello-node --url": exit status 1
functional_test.go:1564: found endpoint for hello-node: 
functional_test.go:1572: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.03s)
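
All three ServiceCmd failures reduce to `minikube service ... --url` exiting 1 after ~15 s and printing nothing, so the follow-up URL/scheme validation sees an empty endpoint. A minimal sketch of that scheme check with net/url, fed the empty output from the log:

    // Sketch: validate the scheme of whatever `service --url` printed.
    package main

    import (
        "fmt"
        "net/url"
    )

    func checkScheme(raw, want string) error {
        u, err := url.Parse(raw)
        if err != nil || u.Scheme != want {
            return fmt.Errorf("expected scheme %q, got %q (input %q)", want, u.Scheme, raw)
        }
        return nil
    }

    func main() {
        // Empty string mirrors the empty endpoint reported at functional_test.go:1564.
        fmt.Println(checkScheme("", "http"))
    }
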

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (58.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-173500 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-173500 -- exec busybox-5bc68d56bd-cfnc7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-173500 -- exec busybox-5bc68d56bd-cfnc7 -- sh -c "ping -c 1 172.24.96.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-173500 -- exec busybox-5bc68d56bd-cfnc7 -- sh -c "ping -c 1 172.24.96.1": exit status 1 (10.5184724s)

                                                
                                                
-- stdout --
	PING 172.24.96.1 (172.24.96.1): 56 data bytes
	
	--- 172.24.96.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0109 00:09:41.914103    1980 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (172.24.96.1) from pod (busybox-5bc68d56bd-cfnc7): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-173500 -- exec busybox-5bc68d56bd-txtnl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-173500 -- exec busybox-5bc68d56bd-txtnl -- sh -c "ping -c 1 172.24.96.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-173500 -- exec busybox-5bc68d56bd-txtnl -- sh -c "ping -c 1 172.24.96.1": exit status 1 (10.5230282s)

                                                
                                                
-- stdout --
	PING 172.24.96.1 (172.24.96.1): 56 data bytes
	
	--- 172.24.96.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0109 00:09:52.971687    8476 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (172.24.96.1) from pod (busybox-5bc68d56bd-txtnl): exit status 1
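
Both ping attempts show the same pattern: DNS for host.minikube.internal resolves, but a single ICMP ping to the Hyper-V host gateway 172.24.96.1 loses 100% of its packets, so the exec exits 1. A small sketch that parses the busybox ping summary quoted above:

    // Sketch: extract packet loss from the ping summary line in the logs.
    package main

    import (
        "fmt"
        "regexp"
        "strconv"
    )

    var lossRE = regexp.MustCompile(`(\d+)% packet loss`)

    func packetLoss(out string) (int, bool) {
        m := lossRE.FindStringSubmatch(out)
        if m == nil {
            return 0, false
        }
        n, err := strconv.Atoi(m[1])
        return n, err == nil
    }

    func main() {
        out := "1 packets transmitted, 0 packets received, 100% packet loss"
        loss, ok := packetLoss(out)
        fmt.Println(loss, ok) // 100 true -> 172.24.96.1 unreachable from the pod
    }
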
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-173500 -n multinode-173500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-173500 -n multinode-173500: (12.2923742s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 logs -n 25: (8.7634239s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-966700 ssh -- ls                    | mount-start-2-966700 | minikube1\jenkins | v1.32.0 | 08 Jan 24 23:58 UTC | 08 Jan 24 23:58 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-878400                           | mount-start-1-878400 | minikube1\jenkins | v1.32.0 | 08 Jan 24 23:58 UTC | 08 Jan 24 23:59 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-966700 ssh -- ls                    | mount-start-2-966700 | minikube1\jenkins | v1.32.0 | 08 Jan 24 23:59 UTC | 08 Jan 24 23:59 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-966700                           | mount-start-2-966700 | minikube1\jenkins | v1.32.0 | 08 Jan 24 23:59 UTC | 08 Jan 24 23:59 UTC |
	| start   | -p mount-start-2-966700                           | mount-start-2-966700 | minikube1\jenkins | v1.32.0 | 08 Jan 24 23:59 UTC | 09 Jan 24 00:01 UTC |
	| mount   | C:\Users\jenkins.minikube1:/minikube-host         | mount-start-2-966700 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:01 UTC |                     |
	|         | --profile mount-start-2-966700 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-966700 ssh -- ls                    | mount-start-2-966700 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-966700                           | mount-start-2-966700 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:02 UTC |
	| delete  | -p mount-start-1-878400                           | mount-start-1-878400 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:02 UTC | 09 Jan 24 00:02 UTC |
	| start   | -p multinode-173500                               | multinode-173500     | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:02 UTC | 09 Jan 24 00:09 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-173500 -- apply -f                   | multinode-173500     | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:09 UTC | 09 Jan 24 00:09 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-173500 -- rollout                    | multinode-173500     | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:09 UTC | 09 Jan 24 00:09 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-173500 -- get pods -o                | multinode-173500     | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:09 UTC | 09 Jan 24 00:09 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-173500 -- get pods -o                | multinode-173500     | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:09 UTC | 09 Jan 24 00:09 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-173500 -- exec                       | multinode-173500     | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:09 UTC | 09 Jan 24 00:09 UTC |
	|         | busybox-5bc68d56bd-cfnc7 --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-173500 -- exec                       | multinode-173500     | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:09 UTC | 09 Jan 24 00:09 UTC |
	|         | busybox-5bc68d56bd-txtnl --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-173500 -- exec                       | multinode-173500     | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:09 UTC | 09 Jan 24 00:09 UTC |
	|         | busybox-5bc68d56bd-cfnc7 --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-173500 -- exec                       | multinode-173500     | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:09 UTC | 09 Jan 24 00:09 UTC |
	|         | busybox-5bc68d56bd-txtnl --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-173500 -- exec                       | multinode-173500     | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:09 UTC | 09 Jan 24 00:09 UTC |
	|         | busybox-5bc68d56bd-cfnc7 -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-173500 -- exec                       | multinode-173500     | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:09 UTC | 09 Jan 24 00:09 UTC |
	|         | busybox-5bc68d56bd-txtnl -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-173500 -- get pods -o                | multinode-173500     | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:09 UTC | 09 Jan 24 00:09 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-173500 -- exec                       | multinode-173500     | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:09 UTC | 09 Jan 24 00:09 UTC |
	|         | busybox-5bc68d56bd-cfnc7                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-173500 -- exec                       | multinode-173500     | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:09 UTC |                     |
	|         | busybox-5bc68d56bd-cfnc7 -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.24.96.1                          |                      |                   |         |                     |                     |
	| kubectl | -p multinode-173500 -- exec                       | multinode-173500     | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:09 UTC | 09 Jan 24 00:09 UTC |
	|         | busybox-5bc68d56bd-txtnl                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-173500 -- exec                       | multinode-173500     | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:09 UTC |                     |
	|         | busybox-5bc68d56bd-txtnl -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.24.96.1                          |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/09 00:02:26
	Running on machine: minikube1
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0109 00:02:26.591910     124 out.go:296] Setting OutFile to fd 732 ...
	I0109 00:02:26.592665     124 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:02:26.592665     124 out.go:309] Setting ErrFile to fd 1008...
	I0109 00:02:26.592665     124 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:02:26.617268     124 out.go:303] Setting JSON to false
	I0109 00:02:26.620519     124 start.go:128] hostinfo: {"hostname":"minikube1","uptime":6041,"bootTime":1704752505,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0109 00:02:26.620519     124 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0109 00:02:26.626784     124 out.go:177] * [multinode-173500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I0109 00:02:26.631802     124 notify.go:220] Checking for updates...
	I0109 00:02:26.634402     124 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:02:26.637642     124 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0109 00:02:26.640184     124 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0109 00:02:26.643816     124 out.go:177]   - MINIKUBE_LOCATION=17830
	I0109 00:02:26.649915     124 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:02:26.653024     124 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:02:32.143527     124 out.go:177] * Using the hyperv driver based on user configuration
	I0109 00:02:32.147985     124 start.go:298] selected driver: hyperv
	I0109 00:02:32.147985     124 start.go:902] validating driver "hyperv" against <nil>
	I0109 00:02:32.147985     124 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0109 00:02:32.202013     124 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0109 00:02:32.203523     124 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0109 00:02:32.203523     124 cni.go:84] Creating CNI manager for ""
	I0109 00:02:32.204072     124 cni.go:136] 0 nodes found, recommending kindnet
	I0109 00:02:32.204303     124 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0109 00:02:32.204303     124 start_flags.go:323] config:
	{Name:multinode-173500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-173500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:02:32.204869     124 iso.go:125] acquiring lock: {Name:mk6e44fd4f974e035b521383471f58bfbae3f4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:02:32.209160     124 out.go:177] * Starting control plane node multinode-173500 in cluster multinode-173500
	I0109 00:02:32.212053     124 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0109 00:02:32.212053     124 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0109 00:02:32.212053     124 cache.go:56] Caching tarball of preloaded images
	I0109 00:02:32.212863     124 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0109 00:02:32.212863     124 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0109 00:02:32.213444     124 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\config.json ...
	I0109 00:02:32.213806     124 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\config.json: {Name:mkc6c35849c4068b87217dc01995022a3d37e425 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:02:32.214543     124 start.go:365] acquiring machines lock for multinode-173500: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0109 00:02:32.214543     124 start.go:369] acquired machines lock for "multinode-173500" in 0s
	I0109 00:02:32.214543     124 start.go:93] Provisioning new machine with config: &{Name:multinode-173500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-173500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0109 00:02:32.215615     124 start.go:125] createHost starting for "" (driver="hyperv")
	I0109 00:02:32.217496     124 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0109 00:02:32.218484     124 start.go:159] libmachine.API.Create for "multinode-173500" (driver="hyperv")
	I0109 00:02:32.218484     124 client.go:168] LocalClient.Create starting
	I0109 00:02:32.218484     124 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0109 00:02:32.218484     124 main.go:141] libmachine: Decoding PEM data...
	I0109 00:02:32.218484     124 main.go:141] libmachine: Parsing certificate...
	I0109 00:02:32.219681     124 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0109 00:02:32.219681     124 main.go:141] libmachine: Decoding PEM data...
	I0109 00:02:32.219681     124 main.go:141] libmachine: Parsing certificate...
	I0109 00:02:32.219681     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0109 00:02:34.382986     124 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0109 00:02:34.383101     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:02:34.383101     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0109 00:02:36.198775     124 main.go:141] libmachine: [stdout =====>] : False
	
	I0109 00:02:36.198775     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:02:36.198775     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0109 00:02:37.725337     124 main.go:141] libmachine: [stdout =====>] : True
	
	I0109 00:02:37.725337     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:02:37.725337     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0109 00:02:41.351671     124 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0109 00:02:41.351671     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:02:41.354686     124 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0109 00:02:41.828011     124 main.go:141] libmachine: Creating SSH key...
	I0109 00:02:41.955374     124 main.go:141] libmachine: Creating VM...
	I0109 00:02:41.955374     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0109 00:02:44.813526     124 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0109 00:02:44.813717     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:02:44.813870     124 main.go:141] libmachine: Using switch "Default Switch"
	I0109 00:02:44.813946     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0109 00:02:46.690418     124 main.go:141] libmachine: [stdout =====>] : True
	
	I0109 00:02:46.827531     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:02:46.827696     124 main.go:141] libmachine: Creating VHD
	I0109 00:02:46.827791     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0109 00:02:50.604713     124 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : AA71D70B-9208-44A9-9DBC-0DEC854C7339
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0109 00:02:50.604899     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:02:50.605003     124 main.go:141] libmachine: Writing magic tar header
	I0109 00:02:50.605089     124 main.go:141] libmachine: Writing SSH key tar header
	I0109 00:02:50.614850     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0109 00:02:53.852824     124 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:02:53.852824     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:02:53.852929     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\disk.vhd' -SizeBytes 20000MB
	I0109 00:02:56.532701     124 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:02:56.532701     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:02:56.532701     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-173500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0109 00:03:00.379153     124 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-173500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0109 00:03:00.379288     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:03:00.379288     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-173500 -DynamicMemoryEnabled $false
	I0109 00:03:02.731320     124 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:03:02.731320     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:03:02.731320     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-173500 -Count 2
	I0109 00:03:04.965011     124 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:03:04.965011     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:03:04.965011     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-173500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\boot2docker.iso'
	I0109 00:03:07.677910     124 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:03:07.678010     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:03:07.678010     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-173500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\disk.vhd'
	I0109 00:03:10.450234     124 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:03:10.450234     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:03:10.450234     124 main.go:141] libmachine: Starting VM...
	I0109 00:03:10.450335     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-173500
	I0109 00:03:13.657269     124 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:03:13.657309     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:03:13.657309     124 main.go:141] libmachine: Waiting for host to start...
	I0109 00:03:13.657403     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:03:15.997911     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:03:15.997911     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:03:15.997911     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:03:18.658939     124 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:03:18.658939     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:03:19.673832     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:03:21.962310     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:03:21.962492     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:03:21.962595     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:03:24.535701     124 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:03:24.535744     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:03:25.538615     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:03:27.810328     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:03:27.810328     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:03:27.810480     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:03:30.411650     124 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:03:30.411824     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:03:31.425812     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:03:33.675340     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:03:33.675340     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:03:33.675340     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:03:36.289547     124 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:03:36.289547     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:03:37.290660     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:03:39.577144     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:03:39.577431     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:03:39.577431     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:03:42.221878     124 main.go:141] libmachine: [stdout =====>] : 172.24.100.178
	
	I0109 00:03:42.221878     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:03:42.222089     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:03:44.406997     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:03:44.406997     124 main.go:141] libmachine: [stderr =====>] : 
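Note: the repeated Get-VM state and networkadapters[0].ipaddresses[0] probes above form the "Waiting for host to start" loop: the driver keeps asking Hyper-V for the VM state and the first adapter's first address until DHCP on the Default Switch assigns one (172.24.100.178 in this run). A compact, illustrative Go sketch of that loop follows; the ps helper and the one-second sleep are assumptions for the example, not values taken from the driver.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // ps runs one PowerShell expression and returns its trimmed stdout (illustrative helper).
    func ps(expr string) string {
        out, _ := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
        return strings.TrimSpace(string(out))
    }

    func main() {
        for {
            state := ps(`( Hyper-V\Get-VM multinode-173500 ).state`)
            ip := ps(`(( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]`)
            if state == "Running" && ip != "" {
                fmt.Println("VM reachable at", ip)
                return
            }
            time.Sleep(time.Second) // the log shows one probe every few seconds
        }
    }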
	I0109 00:03:44.407109     124 machine.go:88] provisioning docker machine ...
	I0109 00:03:44.407249     124 buildroot.go:166] provisioning hostname "multinode-173500"
	I0109 00:03:44.407330     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:03:46.651230     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:03:46.651230     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:03:46.651348     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:03:49.253351     124 main.go:141] libmachine: [stdout =====>] : 172.24.100.178
	
	I0109 00:03:49.253829     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:03:49.260906     124 main.go:141] libmachine: Using SSH client type: native
	I0109 00:03:49.271143     124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.100.178 22 <nil> <nil>}
	I0109 00:03:49.271143     124 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-173500 && echo "multinode-173500" | sudo tee /etc/hostname
	I0109 00:03:49.455364     124 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-173500
	
	I0109 00:03:49.455908     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:03:51.634015     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:03:51.634015     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:03:51.634263     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:03:54.237331     124 main.go:141] libmachine: [stdout =====>] : 172.24.100.178
	
	I0109 00:03:54.237402     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:03:54.244322     124 main.go:141] libmachine: Using SSH client type: native
	I0109 00:03:54.244903     124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.100.178 22 <nil> <nil>}
	I0109 00:03:54.245079     124 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-173500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-173500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-173500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:03:54.413420     124 main.go:141] libmachine: SSH cmd err, output: <nil>: 
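Note: provisioning switches here from PowerShell to SSH. The hostname is set, /etc/hostname is written, and a 127.0.1.1 entry is rewritten or appended only when no line for multinode-173500 is already present in /etc/hosts. A minimal sketch of sending such a command with golang.org/x/crypto/ssh and the machine's id_rsa key (address and key path copied from the log, host-key verification skipped for brevity; this is not the actual libmachine client):

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\id_rsa`)
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // skipped here purely for the sketch
        }
        client, err := ssh.Dial("tcp", "172.24.100.178:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(`sudo hostname multinode-173500 && echo "multinode-173500" | sudo tee /etc/hostname`)
        fmt.Println(string(out), err)
    }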
	I0109 00:03:54.413420     124 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0109 00:03:54.413420     124 buildroot.go:174] setting up certificates
	I0109 00:03:54.413420     124 provision.go:83] configureAuth start
	I0109 00:03:54.413420     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:03:56.588141     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:03:56.588141     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:03:56.588141     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:03:59.208674     124 main.go:141] libmachine: [stdout =====>] : 172.24.100.178
	
	I0109 00:03:59.208674     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:03:59.208919     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:04:01.391717     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:04:01.391717     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:01.391867     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:04:03.982835     124 main.go:141] libmachine: [stdout =====>] : 172.24.100.178
	
	I0109 00:04:03.982835     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:03.982835     124 provision.go:138] copyHostCerts
	I0109 00:04:03.983292     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0109 00:04:03.983678     124 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0109 00:04:03.983800     124 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0109 00:04:03.983967     124 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0109 00:04:03.985515     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0109 00:04:03.985602     124 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0109 00:04:03.985602     124 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0109 00:04:03.986912     124 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0109 00:04:03.987738     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0109 00:04:03.988289     124 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0109 00:04:03.988489     124 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0109 00:04:03.988521     124 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0109 00:04:03.989798     124 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-173500 san=[172.24.100.178 172.24.100.178 localhost 127.0.0.1 minikube multinode-173500]
	I0109 00:04:04.213955     124 provision.go:172] copyRemoteCerts
	I0109 00:04:04.231716     124 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:04:04.231793     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:04:06.401502     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:04:06.401502     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:06.401502     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:04:09.021877     124 main.go:141] libmachine: [stdout =====>] : 172.24.100.178
	
	I0109 00:04:09.022000     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:09.022082     124 sshutil.go:53] new ssh client: &{IP:172.24.100.178 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\id_rsa Username:docker}
	I0109 00:04:09.132623     124 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9008302s)
	I0109 00:04:09.132748     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0109 00:04:09.133285     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0109 00:04:09.172559     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0109 00:04:09.173032     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0109 00:04:09.217871     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0109 00:04:09.217929     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0109 00:04:09.263047     124 provision.go:86] duration metric: configureAuth took 14.8495533s
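Note: configureAuth copies the host certs into the .minikube store, generates a server certificate whose subject alternative names come from the "san=[...]" list above, and then copies ca.pem, server.pem and server-key.pem into /etc/docker over SSH. A sketch of the corresponding x509 template using Go's crypto/x509; only the SANs and the org string are taken from the log, every other field is an assumption for illustration.

    package main

    import (
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // serverCertTemplate builds a TLS server-auth certificate template carrying the
    // names and IPs reported in the "san=[...]" log line above.
    func serverCertTemplate() *x509.Certificate {
        return &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-173500"}},
            DNSNames:     []string{"localhost", "minikube", "multinode-173500"},
            IPAddresses:  []net.IP{net.ParseIP("172.24.100.178"), net.ParseIP("127.0.0.1")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(10, 0, 0), // validity period is an assumption
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
    }

    func main() { _ = serverCertTemplate() }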
	I0109 00:04:09.263114     124 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:04:09.263739     124 config.go:182] Loaded profile config "multinode-173500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0109 00:04:09.263798     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:04:11.437689     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:04:11.437956     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:11.437956     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:04:14.026940     124 main.go:141] libmachine: [stdout =====>] : 172.24.100.178
	
	I0109 00:04:14.026940     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:14.033001     124 main.go:141] libmachine: Using SSH client type: native
	I0109 00:04:14.033001     124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.100.178 22 <nil> <nil>}
	I0109 00:04:14.033001     124 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0109 00:04:14.188709     124 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0109 00:04:14.188709     124 buildroot.go:70] root file system type: tmpfs
	I0109 00:04:14.188709     124 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0109 00:04:14.188709     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:04:16.344291     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:04:16.344291     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:16.344373     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:04:18.930031     124 main.go:141] libmachine: [stdout =====>] : 172.24.100.178
	
	I0109 00:04:18.930433     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:18.936951     124 main.go:141] libmachine: Using SSH client type: native
	I0109 00:04:18.937547     124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.100.178 22 <nil> <nil>}
	I0109 00:04:18.937694     124 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0109 00:04:19.116331     124 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0109 00:04:19.116598     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:04:21.300734     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:04:21.300734     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:21.300853     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:04:23.915591     124 main.go:141] libmachine: [stdout =====>] : 172.24.100.178
	
	I0109 00:04:23.915777     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:23.922671     124 main.go:141] libmachine: Using SSH client type: native
	I0109 00:04:23.923364     124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.100.178 22 <nil> <nil>}
	I0109 00:04:23.923364     124 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0109 00:04:25.141650     124 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0109 00:04:25.141706     124 machine.go:91] provisioned docker machine in 40.7345932s
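Note: the diff-or-swap one-liner above is what makes the unit update idempotent: the new unit is written to docker.service.new, compared against the installed file, and only moved into place (followed by daemon-reload, enable and restart) when the two differ. The same command string, reproduced as a Go constant purely for reference:

    package main

    import "fmt"

    // swapDockerUnitCmd returns the compare-and-swap command seen in the log: replace the unit
    // and restart docker only when docker.service.new differs from the installed unit.
    func swapDockerUnitCmd() string {
        return `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ` +
            `{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
            `sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`
    }

    func main() {
        fmt.Println(swapDockerUnitCmd())
    }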
	I0109 00:04:25.141706     124 client.go:171] LocalClient.Create took 1m52.9232113s
	I0109 00:04:25.141786     124 start.go:167] duration metric: libmachine.API.Create for "multinode-173500" took 1m52.9232905s
	I0109 00:04:25.141853     124 start.go:300] post-start starting for "multinode-173500" (driver="hyperv")
	I0109 00:04:25.141853     124 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:04:25.156634     124 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:04:25.157627     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:04:27.327032     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:04:27.327247     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:27.327373     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:04:29.932396     124 main.go:141] libmachine: [stdout =====>] : 172.24.100.178
	
	I0109 00:04:29.932563     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:29.932941     124 sshutil.go:53] new ssh client: &{IP:172.24.100.178 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\id_rsa Username:docker}
	I0109 00:04:30.042354     124 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8846657s)
	I0109 00:04:30.059663     124 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:04:30.067493     124 command_runner.go:130] > NAME=Buildroot
	I0109 00:04:30.067750     124 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0109 00:04:30.067750     124 command_runner.go:130] > ID=buildroot
	I0109 00:04:30.067750     124 command_runner.go:130] > VERSION_ID=2021.02.12
	I0109 00:04:30.067750     124 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0109 00:04:30.070266     124 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:04:30.070348     124 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0109 00:04:30.070428     124 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0109 00:04:30.072487     124 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0109 00:04:30.072612     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0109 00:04:30.088357     124 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:04:30.106424     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0109 00:04:30.146260     124 start.go:303] post-start completed in 5.0044071s
	I0109 00:04:30.147162     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:04:32.318735     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:04:32.318735     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:32.318873     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:04:34.920682     124 main.go:141] libmachine: [stdout =====>] : 172.24.100.178
	
	I0109 00:04:34.920682     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:34.920967     124 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\config.json ...
	I0109 00:04:34.924734     124 start.go:128] duration metric: createHost completed in 2m2.7090666s
	I0109 00:04:34.924804     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:04:37.103732     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:04:37.103950     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:37.103950     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:04:39.737951     124 main.go:141] libmachine: [stdout =====>] : 172.24.100.178
	
	I0109 00:04:39.737951     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:39.744319     124 main.go:141] libmachine: Using SSH client type: native
	I0109 00:04:39.745081     124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.100.178 22 <nil> <nil>}
	I0109 00:04:39.745081     124 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0109 00:04:39.899556     124 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704758679.899726678
	
	I0109 00:04:39.899556     124 fix.go:206] guest clock: 1704758679.899726678
	I0109 00:04:39.899556     124 fix.go:219] Guest: 2024-01-09 00:04:39.899726678 +0000 UTC Remote: 2024-01-09 00:04:34.9248041 +0000 UTC m=+128.515898201 (delta=4.974922578s)
	I0109 00:04:39.899775     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:04:42.086612     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:04:42.086892     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:42.086892     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:04:44.720596     124 main.go:141] libmachine: [stdout =====>] : 172.24.100.178
	
	I0109 00:04:44.720596     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:44.727046     124 main.go:141] libmachine: Using SSH client type: native
	I0109 00:04:44.727980     124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.100.178 22 <nil> <nil>}
	I0109 00:04:44.727980     124 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1704758679
	I0109 00:04:44.891635     124 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jan  9 00:04:39 UTC 2024
	
	I0109 00:04:44.891696     124 fix.go:226] clock set: Tue Jan  9 00:04:39 UTC 2024
	 (err=<nil>)
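Note: the clock fix above reads the guest epoch with `date +%s.%N`, compares it to the host clock (a delta of about 4.97s in this run), and then writes a time back with `sudo date -s @<seconds>`. A toy Go sketch of that comparison; the two-second drift threshold is an assumption for the example, the log itself simply reports the delta and resets the clock.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Guest epoch as parsed from the `date +%s.%N` output in the log above.
        guest := time.Unix(1704758679, 899726678)
        host := time.Now()
        delta := guest.Sub(host)
        fmt.Printf("guest/host clock delta: %v\n", delta)
        if delta > 2*time.Second || delta < -2*time.Second { // threshold chosen for illustration only
            fmt.Printf("would run: sudo date -s @%d\n", guest.Unix())
        }
    }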
	I0109 00:04:44.891696     124 start.go:83] releasing machines lock for "multinode-173500", held for 2m12.6771454s
	I0109 00:04:44.891807     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:04:47.053077     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:04:47.053405     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:47.053405     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:04:49.644367     124 main.go:141] libmachine: [stdout =====>] : 172.24.100.178
	
	I0109 00:04:49.644667     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:49.649015     124 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:04:49.649015     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:04:49.665759     124 ssh_runner.go:195] Run: cat /version.json
	I0109 00:04:49.665839     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:04:51.887949     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:04:51.887949     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:51.887949     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:04:51.893065     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:04:51.893065     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:51.893065     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:04:54.619656     124 main.go:141] libmachine: [stdout =====>] : 172.24.100.178
	
	I0109 00:04:54.619740     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:54.619926     124 sshutil.go:53] new ssh client: &{IP:172.24.100.178 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\id_rsa Username:docker}
	I0109 00:04:54.638786     124 main.go:141] libmachine: [stdout =====>] : 172.24.100.178
	
	I0109 00:04:54.638786     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:04:54.639816     124 sshutil.go:53] new ssh client: &{IP:172.24.100.178 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\id_rsa Username:docker}
	I0109 00:04:54.730455     124 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1702660877-17806", "minikube_version": "v1.32.0", "commit": "957da21b08687cca2533dd65b67e68ead277b79e"}
	I0109 00:04:54.730570     124 ssh_runner.go:235] Completed: cat /version.json: (5.0647301s)
	I0109 00:04:54.744733     124 ssh_runner.go:195] Run: systemctl --version
	I0109 00:04:54.850387     124 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0109 00:04:54.850387     124 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2013713s)
	I0109 00:04:54.850387     124 command_runner.go:130] > systemd 247 (247)
	I0109 00:04:54.850387     124 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0109 00:04:54.866767     124 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0109 00:04:54.874666     124 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0109 00:04:54.875570     124 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:04:54.889432     124 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:04:54.914910     124 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0109 00:04:54.915501     124 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:04:54.915501     124 start.go:475] detecting cgroup driver to use...
	I0109 00:04:54.915829     124 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:04:54.948524     124 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0109 00:04:54.965247     124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0109 00:04:54.996520     124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0109 00:04:55.012384     124 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0109 00:04:55.026073     124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0109 00:04:55.057775     124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0109 00:04:55.088834     124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0109 00:04:55.122840     124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0109 00:04:55.157108     124 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:04:55.186658     124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0109 00:04:55.220683     124 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:04:55.236549     124 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0109 00:04:55.249354     124 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:04:55.279473     124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:04:55.452267     124 ssh_runner.go:195] Run: sudo systemctl restart containerd
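Note: before the run settles on Docker as the runtime, /etc/containerd/config.toml is rewritten with a series of sed expressions: pin the sandbox image to pause:3.9, force restrict_oom_score_adj to false, set SystemdCgroup = false (cgroupfs driver), switch the runtime to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, then restart containerd. One of those rewrites expressed with Go's regexp package, as a reference for what the sed one-liner does:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Sample fragment standing in for /etc/containerd/config.toml.
        conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
        // Same substitution as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }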
	I0109 00:04:55.480037     124 start.go:475] detecting cgroup driver to use...
	I0109 00:04:55.494706     124 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0109 00:04:55.513890     124 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0109 00:04:55.513890     124 command_runner.go:130] > [Unit]
	I0109 00:04:55.513890     124 command_runner.go:130] > Description=Docker Application Container Engine
	I0109 00:04:55.513890     124 command_runner.go:130] > Documentation=https://docs.docker.com
	I0109 00:04:55.513890     124 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0109 00:04:55.513890     124 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0109 00:04:55.513890     124 command_runner.go:130] > StartLimitBurst=3
	I0109 00:04:55.513890     124 command_runner.go:130] > StartLimitIntervalSec=60
	I0109 00:04:55.513890     124 command_runner.go:130] > [Service]
	I0109 00:04:55.513890     124 command_runner.go:130] > Type=notify
	I0109 00:04:55.513890     124 command_runner.go:130] > Restart=on-failure
	I0109 00:04:55.513890     124 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0109 00:04:55.513890     124 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0109 00:04:55.513890     124 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0109 00:04:55.513890     124 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0109 00:04:55.513890     124 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0109 00:04:55.513890     124 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0109 00:04:55.513890     124 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0109 00:04:55.513890     124 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0109 00:04:55.513890     124 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0109 00:04:55.513890     124 command_runner.go:130] > ExecStart=
	I0109 00:04:55.513890     124 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0109 00:04:55.513890     124 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0109 00:04:55.513890     124 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0109 00:04:55.513890     124 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0109 00:04:55.513890     124 command_runner.go:130] > LimitNOFILE=infinity
	I0109 00:04:55.513890     124 command_runner.go:130] > LimitNPROC=infinity
	I0109 00:04:55.513890     124 command_runner.go:130] > LimitCORE=infinity
	I0109 00:04:55.513890     124 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0109 00:04:55.513890     124 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0109 00:04:55.513890     124 command_runner.go:130] > TasksMax=infinity
	I0109 00:04:55.513890     124 command_runner.go:130] > TimeoutStartSec=0
	I0109 00:04:55.513890     124 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0109 00:04:55.513890     124 command_runner.go:130] > Delegate=yes
	I0109 00:04:55.514439     124 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0109 00:04:55.514439     124 command_runner.go:130] > KillMode=process
	I0109 00:04:55.514439     124 command_runner.go:130] > [Install]
	I0109 00:04:55.514439     124 command_runner.go:130] > WantedBy=multi-user.target
	I0109 00:04:55.530249     124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:04:55.565599     124 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:04:55.600879     124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:04:55.630896     124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0109 00:04:55.666695     124 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0109 00:04:55.726258     124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0109 00:04:55.744248     124 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:04:55.776018     124 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0109 00:04:55.792061     124 ssh_runner.go:195] Run: which cri-dockerd
	I0109 00:04:55.797649     124 command_runner.go:130] > /usr/bin/cri-dockerd
	I0109 00:04:55.810913     124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0109 00:04:55.826363     124 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0109 00:04:55.874602     124 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0109 00:04:56.042464     124 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0109 00:04:56.196548     124 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0109 00:04:56.196833     124 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0109 00:04:56.247224     124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:04:56.410939     124 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0109 00:04:57.971568     124 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5605429s)
	I0109 00:04:57.985674     124 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0109 00:04:58.165092     124 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0109 00:04:58.337311     124 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0109 00:04:58.503995     124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:04:58.678542     124 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0109 00:04:58.719420     124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:04:58.885753     124 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0109 00:04:58.990975     124 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0109 00:04:59.005444     124 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0109 00:04:59.012445     124 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0109 00:04:59.012445     124 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0109 00:04:59.012445     124 command_runner.go:130] > Device: 16h/22d	Inode: 954         Links: 1
	I0109 00:04:59.012445     124 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0109 00:04:59.012445     124 command_runner.go:130] > Access: 2024-01-09 00:04:58.904653024 +0000
	I0109 00:04:59.012445     124 command_runner.go:130] > Modify: 2024-01-09 00:04:58.904653024 +0000
	I0109 00:04:59.012445     124 command_runner.go:130] > Change: 2024-01-09 00:04:58.907653024 +0000
	I0109 00:04:59.012445     124 command_runner.go:130] >  Birth: -
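Note: after cri-docker is restarted, startup blocks for up to 60 seconds until /var/run/cri-dockerd.sock exists (checked with stat above) before moving on to crictl. A small local-filesystem sketch of that wait; the 500 ms poll interval is an assumption for the example.

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func main() {
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            if _, err := os.Stat("/var/run/cri-dockerd.sock"); err == nil {
                fmt.Println("socket is ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // poll interval chosen for illustration
        }
        fmt.Println("timed out waiting for cri-dockerd.sock")
    }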
	I0109 00:04:59.012445     124 start.go:543] Will wait 60s for crictl version
	I0109 00:04:59.027397     124 ssh_runner.go:195] Run: which crictl
	I0109 00:04:59.032365     124 command_runner.go:130] > /usr/bin/crictl
	I0109 00:04:59.045868     124 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:04:59.124476     124 command_runner.go:130] > Version:  0.1.0
	I0109 00:04:59.124523     124 command_runner.go:130] > RuntimeName:  docker
	I0109 00:04:59.124523     124 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0109 00:04:59.124523     124 command_runner.go:130] > RuntimeApiVersion:  v1
	I0109 00:04:59.124523     124 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0109 00:04:59.135424     124 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0109 00:04:59.176672     124 command_runner.go:130] > 24.0.7
	I0109 00:04:59.186669     124 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0109 00:04:59.221669     124 command_runner.go:130] > 24.0.7
	I0109 00:04:59.226574     124 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0109 00:04:59.226574     124 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0109 00:04:59.232109     124 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0109 00:04:59.232109     124 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0109 00:04:59.232109     124 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0109 00:04:59.232202     124 ip.go:207] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:c4:61:0a Flags:up|broadcast|multicast|running}
	I0109 00:04:59.235552     124 ip.go:210] interface addr: fe80::3fa5:15f5:46dc:dc8f/64
	I0109 00:04:59.235552     124 ip.go:210] interface addr: 172.24.96.1/20
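Note: the host-side address of the Hyper-V Default Switch (172.24.96.1/20 here) is found by scanning the host's interfaces for the one whose name matches the "vEthernet (Default Switch)" prefix; that address is what gets published to the guest as host.minikube.internal in the next step. An illustrative sketch with Go's net package:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func main() {
        ifaces, _ := net.Interfaces()
        for _, iface := range ifaces {
            if !strings.HasPrefix(iface.Name, "vEthernet (Default Switch)") {
                continue
            }
            addrs, _ := iface.Addrs()
            for _, a := range addrs {
                // Keep only the IPv4 address, as the log does (172.24.96.1 in this run).
                if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
                    fmt.Println("host-side switch address:", ipnet.IP)
                }
            }
        }
    }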
	I0109 00:04:59.249842     124 ssh_runner.go:195] Run: grep 172.24.96.1	host.minikube.internal$ /etc/hosts
	I0109 00:04:59.255525     124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.24.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:04:59.278643     124 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0109 00:04:59.290126     124 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0109 00:04:59.316248     124 docker.go:671] Got preloaded images: 
	I0109 00:04:59.316248     124 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0109 00:04:59.330733     124 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0109 00:04:59.346187     124 command_runner.go:139] > {"Repositories":{}}
	I0109 00:04:59.360426     124 ssh_runner.go:195] Run: which lz4
	I0109 00:04:59.365580     124 command_runner.go:130] > /usr/bin/lz4
	I0109 00:04:59.366416     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0109 00:04:59.380625     124 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0109 00:04:59.387995     124 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0109 00:04:59.388897     124 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0109 00:04:59.389113     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0109 00:05:02.313776     124 docker.go:635] Took 2.946445 seconds to copy over tarball
	I0109 00:05:02.327558     124 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0109 00:05:11.289059     124 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9614996s)
	I0109 00:05:11.289129     124 ssh_runner.go:146] rm: /preloaded.tar.lz4
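Note: the preload path above copies the ~423 MB preloaded-images tarball into the guest and unpacks it into /var with tar, preserving security.capability xattrs so extracted image layers keep their file capabilities, then deletes the tarball. A sketch of launching that extraction from Go, assuming the tarball has already been copied to /preloaded.tar.lz4 on the guest:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Same invocation as the Run line above, executed on the guest.
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }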
	I0109 00:05:11.359909     124 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0109 00:05:11.376396     124 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0109 00:05:11.376651     124 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0109 00:05:11.423489     124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:05:11.620754     124 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0109 00:05:14.587594     124 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.8063287s)
	I0109 00:05:14.599078     124 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0109 00:05:14.629600     124 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0109 00:05:14.630560     124 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0109 00:05:14.630560     124 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0109 00:05:14.630560     124 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0109 00:05:14.630636     124 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0109 00:05:14.630636     124 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0109 00:05:14.630636     124 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0109 00:05:14.630636     124 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:05:14.630705     124 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0109 00:05:14.630797     124 cache_images.go:84] Images are preloaded, skipping loading
	I0109 00:05:14.642313     124 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0109 00:05:14.677470     124 command_runner.go:130] > cgroupfs
	I0109 00:05:14.678673     124 cni.go:84] Creating CNI manager for ""
	I0109 00:05:14.678827     124 cni.go:136] 1 nodes found, recommending kindnet
	I0109 00:05:14.678915     124 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:05:14.678985     124 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.24.100.178 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-173500 NodeName:multinode-173500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.24.100.178"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.24.100.178 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:05:14.679232     124 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.24.100.178
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-173500"
	  kubeletExtraArgs:
	    node-ip: 172.24.100.178
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.24.100.178"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
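
Note on the evictionHard block above: the intended values are plain "0%"; the trailing !"(MISSING) markers are almost certainly produced by an unescaped % reaching a Printf-style logging call, not by the config that is written to /var/tmp/minikube/kubeadm.yaml. A tiny Go illustration of that formatting behavior:

    package main

    import "fmt"

    func main() {
        // A % with no matching argument is rendered by the fmt package as %!<verb>(MISSING):
        fmt.Println(fmt.Sprintf("nodefs.available: \"0%\""))  // nodefs.available: "0%!"(MISSING)
        // Doubling the percent sign keeps the literal value:
        fmt.Println(fmt.Sprintf("nodefs.available: \"0%%\"")) // nodefs.available: "0%"
    }
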
	
	I0109 00:05:14.679421     124 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-173500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.24.100.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-173500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:05:14.693497     124 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0109 00:05:14.708633     124 command_runner.go:130] > kubeadm
	I0109 00:05:14.708949     124 command_runner.go:130] > kubectl
	I0109 00:05:14.708949     124 command_runner.go:130] > kubelet
	I0109 00:05:14.708949     124 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:05:14.722128     124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:05:14.738845     124 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0109 00:05:14.767139     124 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0109 00:05:14.798157     124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0109 00:05:14.849931     124 ssh_runner.go:195] Run: grep 172.24.100.178	control-plane.minikube.internal$ /etc/hosts
	I0109 00:05:14.857618     124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.24.100.178	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:05:14.881041     124 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500 for IP: 172.24.100.178
	I0109 00:05:14.881129     124 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:05:14.882018     124 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0109 00:05:14.882185     124 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0109 00:05:14.883651     124 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\client.key
	I0109 00:05:14.883713     124 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\client.crt with IP's: []
	I0109 00:05:15.078988     124 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\client.crt ...
	I0109 00:05:15.078988     124 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\client.crt: {Name:mk52b31d0919010031de12863ef5f09902f505b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:05:15.080356     124 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\client.key ...
	I0109 00:05:15.080356     124 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\client.key: {Name:mk92be9b855eccd10eb091c7f4798ac5c15bedde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:05:15.081562     124 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.key.cf5ac2a7
	I0109 00:05:15.082005     124 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.crt.cf5ac2a7 with IP's: [172.24.100.178 10.96.0.1 127.0.0.1 10.0.0.1]
	I0109 00:05:15.455462     124 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.crt.cf5ac2a7 ...
	I0109 00:05:15.455462     124 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.crt.cf5ac2a7: {Name:mk14396f60e3b33355175a465ade244d09b3f453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:05:15.456731     124 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.key.cf5ac2a7 ...
	I0109 00:05:15.456731     124 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.key.cf5ac2a7: {Name:mk151917c601fd5e175e143cbfd48822e4d86843 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:05:15.457750     124 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.crt.cf5ac2a7 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.crt
	I0109 00:05:15.469742     124 certs.go:341] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.key.cf5ac2a7 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.key
	I0109 00:05:15.470754     124 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\proxy-client.key
	I0109 00:05:15.470754     124 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\proxy-client.crt with IP's: []
	I0109 00:05:15.761807     124 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\proxy-client.crt ...
	I0109 00:05:15.761807     124 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\proxy-client.crt: {Name:mkd2c584151a599968e45d087fd5e3262f263dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:05:15.763945     124 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\proxy-client.key ...
	I0109 00:05:15.763945     124 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\proxy-client.key: {Name:mk505d4a3fca64c8c617078c1815ec1122f28337 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:05:15.764450     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0109 00:05:15.765533     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0109 00:05:15.765744     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0109 00:05:15.774932     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0109 00:05:15.774932     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0109 00:05:15.775753     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0109 00:05:15.775912     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0109 00:05:15.776083     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0109 00:05:15.776244     124 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0109 00:05:15.776950     124 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0109 00:05:15.776950     124 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0109 00:05:15.777517     124 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0109 00:05:15.777877     124 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0109 00:05:15.778070     124 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0109 00:05:15.778325     124 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0109 00:05:15.778325     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0109 00:05:15.779013     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:05:15.779172     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0109 00:05:15.779444     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:05:15.818244     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0109 00:05:15.860188     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:05:15.900663     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0109 00:05:15.938601     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:05:15.977376     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0109 00:05:16.016592     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:05:16.057073     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:05:16.098239     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0109 00:05:16.136299     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:05:16.176932     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0109 00:05:16.215871     124 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:05:16.255998     124 ssh_runner.go:195] Run: openssl version
	I0109 00:05:16.264559     124 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0109 00:05:16.278971     124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0109 00:05:16.311005     124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0109 00:05:16.317077     124 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 23:11 /usr/share/ca-certificates/142882.pem
	I0109 00:05:16.317180     124 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:11 /usr/share/ca-certificates/142882.pem
	I0109 00:05:16.329912     124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0109 00:05:16.337296     124 command_runner.go:130] > 3ec20f2e
	I0109 00:05:16.350854     124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:05:16.381933     124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:05:16.413113     124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:05:16.420221     124 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 22:56 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:05:16.420221     124 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:56 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:05:16.432726     124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:05:16.440639     124 command_runner.go:130] > b5213941
	I0109 00:05:16.454840     124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:05:16.485822     124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0109 00:05:16.516550     124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0109 00:05:16.523994     124 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 23:11 /usr/share/ca-certificates/14288.pem
	I0109 00:05:16.524121     124 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:11 /usr/share/ca-certificates/14288.pem
	I0109 00:05:16.536833     124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0109 00:05:16.543916     124 command_runner.go:130] > 51391683
	I0109 00:05:16.558864     124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
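
The three blocks above each follow the same pattern: copy a CA bundle into /usr/share/ca-certificates, take its OpenSSL subject hash, and symlink it as /etc/ssl/certs/<hash>.0 so OpenSSL can find it by hash. A hedged Go sketch of that step (paths illustrative; assumes it runs inside the guest as root with openssl on PATH):

    // Hedged sketch of the hash-and-symlink step; file names are illustrative.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))           // e.g. "b5213941" for minikubeCA.pem above
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash) // OpenSSL resolves CAs by <subject-hash>.0
        _ = os.Remove(link)                              // mirror ln -fs: replace any existing link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            panic(err)
        }
    }
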
	I0109 00:05:16.589584     124 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:05:16.597195     124 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0109 00:05:16.598748     124 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0109 00:05:16.599263     124 kubeadm.go:404] StartCluster: {Name:multinode-173500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-173500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.24.100.178 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:05:16.609398     124 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0109 00:05:16.648994     124 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:05:16.664509     124 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0109 00:05:16.664993     124 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0109 00:05:16.664993     124 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0109 00:05:16.679990     124 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:05:16.710248     124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:05:16.724054     124 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0109 00:05:16.724054     124 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0109 00:05:16.725088     124 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0109 00:05:16.725111     124 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:05:16.725464     124 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:05:16.725564     124 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0109 00:05:17.523967     124 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0109 00:05:17.523967     124 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0109 00:05:31.509392     124 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0109 00:05:31.509501     124 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0109 00:05:31.509711     124 kubeadm.go:322] [preflight] Running pre-flight checks
	I0109 00:05:31.509711     124 command_runner.go:130] > [preflight] Running pre-flight checks
	I0109 00:05:31.509945     124 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0109 00:05:31.509945     124 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0109 00:05:31.510231     124 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0109 00:05:31.510289     124 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0109 00:05:31.510348     124 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0109 00:05:31.510348     124 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0109 00:05:31.510348     124 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0109 00:05:31.510348     124 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0109 00:05:31.515100     124 out.go:204]   - Generating certificates and keys ...
	I0109 00:05:31.515354     124 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0109 00:05:31.515354     124 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0109 00:05:31.515498     124 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0109 00:05:31.515498     124 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0109 00:05:31.515498     124 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0109 00:05:31.515498     124 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0109 00:05:31.515498     124 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0109 00:05:31.515498     124 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0109 00:05:31.516077     124 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0109 00:05:31.516077     124 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0109 00:05:31.516290     124 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0109 00:05:31.516290     124 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0109 00:05:31.516290     124 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0109 00:05:31.516290     124 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0109 00:05:31.516290     124 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-173500] and IPs [172.24.100.178 127.0.0.1 ::1]
	I0109 00:05:31.516290     124 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-173500] and IPs [172.24.100.178 127.0.0.1 ::1]
	I0109 00:05:31.516988     124 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0109 00:05:31.517033     124 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0109 00:05:31.517090     124 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-173500] and IPs [172.24.100.178 127.0.0.1 ::1]
	I0109 00:05:31.517090     124 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-173500] and IPs [172.24.100.178 127.0.0.1 ::1]
	I0109 00:05:31.517090     124 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0109 00:05:31.517090     124 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0109 00:05:31.518054     124 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0109 00:05:31.518054     124 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0109 00:05:31.518054     124 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0109 00:05:31.518054     124 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0109 00:05:31.518054     124 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0109 00:05:31.518054     124 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0109 00:05:31.518054     124 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0109 00:05:31.518054     124 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0109 00:05:31.518054     124 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0109 00:05:31.518623     124 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0109 00:05:31.518683     124 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0109 00:05:31.518683     124 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0109 00:05:31.518888     124 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0109 00:05:31.518888     124 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0109 00:05:31.519049     124 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0109 00:05:31.519073     124 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0109 00:05:31.519168     124 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0109 00:05:31.522858     124 out.go:204]   - Booting up control plane ...
	I0109 00:05:31.519242     124 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0109 00:05:31.522858     124 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0109 00:05:31.522858     124 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0109 00:05:31.522858     124 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0109 00:05:31.522858     124 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0109 00:05:31.522858     124 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0109 00:05:31.522858     124 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0109 00:05:31.523922     124 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:05:31.523922     124 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:05:31.524225     124 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:05:31.524265     124 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:05:31.524376     124 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0109 00:05:31.524376     124 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0109 00:05:31.524967     124 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0109 00:05:31.525023     124 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0109 00:05:31.525106     124 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.003630 seconds
	I0109 00:05:31.525106     124 command_runner.go:130] > [apiclient] All control plane components are healthy after 9.003630 seconds
	I0109 00:05:31.525106     124 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0109 00:05:31.525106     124 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0109 00:05:31.525106     124 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0109 00:05:31.525644     124 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0109 00:05:31.525793     124 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0109 00:05:31.525793     124 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0109 00:05:31.526028     124 kubeadm.go:322] [mark-control-plane] Marking the node multinode-173500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0109 00:05:31.526028     124 command_runner.go:130] > [mark-control-plane] Marking the node multinode-173500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0109 00:05:31.526028     124 kubeadm.go:322] [bootstrap-token] Using token: z25ipb.xptwhm3gof9b27yq
	I0109 00:05:31.526028     124 command_runner.go:130] > [bootstrap-token] Using token: z25ipb.xptwhm3gof9b27yq
	I0109 00:05:31.529570     124 out.go:204]   - Configuring RBAC rules ...
	I0109 00:05:31.529795     124 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0109 00:05:31.529795     124 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0109 00:05:31.529931     124 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0109 00:05:31.530021     124 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0109 00:05:31.530195     124 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0109 00:05:31.530195     124 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0109 00:05:31.530509     124 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0109 00:05:31.530509     124 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0109 00:05:31.530509     124 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0109 00:05:31.530509     124 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0109 00:05:31.531048     124 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0109 00:05:31.531084     124 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0109 00:05:31.531312     124 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0109 00:05:31.531312     124 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0109 00:05:31.531312     124 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0109 00:05:31.531551     124 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0109 00:05:31.531622     124 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0109 00:05:31.531622     124 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0109 00:05:31.531622     124 kubeadm.go:322] 
	I0109 00:05:31.531844     124 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0109 00:05:31.531844     124 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0109 00:05:31.531844     124 kubeadm.go:322] 
	I0109 00:05:31.531844     124 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0109 00:05:31.531844     124 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0109 00:05:31.531844     124 kubeadm.go:322] 
	I0109 00:05:31.531844     124 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0109 00:05:31.531844     124 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0109 00:05:31.532428     124 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0109 00:05:31.532428     124 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0109 00:05:31.532519     124 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0109 00:05:31.532519     124 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0109 00:05:31.532519     124 kubeadm.go:322] 
	I0109 00:05:31.532519     124 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0109 00:05:31.532519     124 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0109 00:05:31.532519     124 kubeadm.go:322] 
	I0109 00:05:31.532519     124 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0109 00:05:31.532519     124 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0109 00:05:31.532519     124 kubeadm.go:322] 
	I0109 00:05:31.532519     124 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0109 00:05:31.532519     124 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0109 00:05:31.533217     124 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0109 00:05:31.533255     124 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0109 00:05:31.533514     124 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0109 00:05:31.533514     124 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0109 00:05:31.533514     124 kubeadm.go:322] 
	I0109 00:05:31.533798     124 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0109 00:05:31.533798     124 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0109 00:05:31.533988     124 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0109 00:05:31.533988     124 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0109 00:05:31.533988     124 kubeadm.go:322] 
	I0109 00:05:31.534347     124 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token z25ipb.xptwhm3gof9b27yq \
	I0109 00:05:31.534347     124 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token z25ipb.xptwhm3gof9b27yq \
	I0109 00:05:31.534507     124 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6a12e94bf3397e7db59fa944f4e20c2c2c34b5794397b381e3c5134eb1900391 \
	I0109 00:05:31.534507     124 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:6a12e94bf3397e7db59fa944f4e20c2c2c34b5794397b381e3c5134eb1900391 \
	I0109 00:05:31.534507     124 command_runner.go:130] > 	--control-plane 
	I0109 00:05:31.534507     124 kubeadm.go:322] 	--control-plane 
	I0109 00:05:31.534688     124 kubeadm.go:322] 
	I0109 00:05:31.534817     124 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0109 00:05:31.534817     124 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0109 00:05:31.534817     124 kubeadm.go:322] 
	I0109 00:05:31.535087     124 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token z25ipb.xptwhm3gof9b27yq \
	I0109 00:05:31.535087     124 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token z25ipb.xptwhm3gof9b27yq \
	I0109 00:05:31.535295     124 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:6a12e94bf3397e7db59fa944f4e20c2c2c34b5794397b381e3c5134eb1900391 
	I0109 00:05:31.535295     124 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6a12e94bf3397e7db59fa944f4e20c2c2c34b5794397b381e3c5134eb1900391 
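
The --discovery-token-ca-cert-hash printed by kubeadm above is its standard public-key pin: a SHA-256 digest of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. The sketch below recomputes such a pin from a CA certificate; the path is illustrative, and whether the result matches the hash in this log depends on the CA actually present on the node:

    // Recompute a kubeadm discovery-token CA certificate hash:
    // sha256 over the CA cert's DER-encoded SubjectPublicKeyInfo.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // illustrative path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
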
	I0109 00:05:31.535295     124 cni.go:84] Creating CNI manager for ""
	I0109 00:05:31.535295     124 cni.go:136] 1 nodes found, recommending kindnet
	I0109 00:05:31.538655     124 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0109 00:05:31.556437     124 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0109 00:05:31.564882     124 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0109 00:05:31.565010     124 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0109 00:05:31.565010     124 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0109 00:05:31.565010     124 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0109 00:05:31.565010     124 command_runner.go:130] > Access: 2024-01-09 00:03:39.631411700 +0000
	I0109 00:05:31.565101     124 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0109 00:05:31.565101     124 command_runner.go:130] > Change: 2024-01-09 00:03:29.422000000 +0000
	I0109 00:05:31.565101     124 command_runner.go:130] >  Birth: -
	I0109 00:05:31.565234     124 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0109 00:05:31.565263     124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0109 00:05:31.620374     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0109 00:05:33.236918     124 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0109 00:05:33.237655     124 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0109 00:05:33.237655     124 command_runner.go:130] > serviceaccount/kindnet created
	I0109 00:05:33.237655     124 command_runner.go:130] > daemonset.apps/kindnet created
	I0109 00:05:33.237655     124 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.6172816s)
	I0109 00:05:33.237785     124 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:05:33.252884     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=multinode-173500 minikube.k8s.io/updated_at=2024_01_09T00_05_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:33.256903     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:33.257340     124 command_runner.go:130] > -16
	I0109 00:05:33.257514     124 ops.go:34] apiserver oom_adj: -16
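
The probe above reads /proc/<apiserver-pid>/oom_adj; a strongly negative value (here -16) means the OOM killer will avoid the kube-apiserver. A hedged Go sketch of the same check, assuming procfs, pgrep, and a single apiserver process, like the shell pipeline in the log:

    // Hedged sketch of the oom_adj probe seen above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pid, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        path := fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid)))
        adj, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        fmt.Println("kube-apiserver oom_adj:", strings.TrimSpace(string(adj))) // -16 in this run
    }
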
	I0109 00:05:33.429440     124 command_runner.go:130] > node/multinode-173500 labeled
	I0109 00:05:33.438845     124 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0109 00:05:33.454420     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:33.577543     124 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:05:33.953859     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:34.080030     124 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:05:34.457395     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:34.563448     124 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:05:34.959675     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:35.079271     124 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:05:35.459131     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:35.572250     124 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:05:35.965452     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:36.083889     124 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:05:36.467698     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:36.581610     124 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:05:36.952736     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:37.079555     124 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:05:37.459312     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:37.574016     124 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:05:37.966736     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:38.088847     124 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:05:38.466576     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:38.592025     124 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:05:38.969684     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:39.094894     124 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:05:39.454801     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:39.573434     124 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:05:39.962047     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:40.095458     124 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:05:40.465224     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:40.626960     124 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:05:40.967552     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:41.135294     124 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:05:41.457722     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:41.596741     124 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:05:41.962405     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:42.133393     124 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:05:42.457136     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:42.595879     124 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:05:42.955573     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:43.103950     124 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:05:43.466288     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:43.581287     124 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:05:43.965467     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:44.198426     124 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:05:44.456640     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:05:44.617152     124 command_runner.go:130] > NAME      SECRETS   AGE
	I0109 00:05:44.617217     124 command_runner.go:130] > default   0         0s
	I0109 00:05:44.617562     124 kubeadm.go:1088] duration metric: took 11.3795329s to wait for elevateKubeSystemPrivileges.
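
The burst of serviceaccounts "default" not found errors above is expected: minikube simply re-runs kubectl get sa default roughly every half second until the controller manager creates the default ServiceAccount, then records how long that wait took. A hedged Go sketch of an equivalent poll loop (kubeconfig path taken from the log; timeout illustrative):

    // Hedged sketch of the ServiceAccount poll loop above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // timeout is illustrative
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
                "-n", "default", "get", "sa", "default")
            if err := cmd.Run(); err == nil {
                fmt.Println("default ServiceAccount is ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // roughly the retry interval seen in the log
        }
        panic("timed out waiting for the default ServiceAccount")
    }
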
	I0109 00:05:44.617599     124 kubeadm.go:406] StartCluster complete in 28.0183338s
	I0109 00:05:44.617599     124 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:05:44.617599     124 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:05:44.618538     124 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:05:44.620884     124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:05:44.621127     124 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0109 00:05:44.621318     124 addons.go:69] Setting storage-provisioner=true in profile "multinode-173500"
	I0109 00:05:44.621431     124 addons.go:237] Setting addon storage-provisioner=true in "multinode-173500"
	I0109 00:05:44.621528     124 config.go:182] Loaded profile config "multinode-173500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0109 00:05:44.621431     124 addons.go:69] Setting default-storageclass=true in profile "multinode-173500"
	I0109 00:05:44.621657     124 host.go:66] Checking if "multinode-173500" exists ...
	I0109 00:05:44.621657     124 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-173500"
	I0109 00:05:44.622865     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:05:44.623535     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:05:44.636654     124 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:05:44.637572     124 kapi.go:59] client config for multinode-173500: &rest.Config{Host:"https://172.24.100.178:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e2c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0109 00:05:44.639866     124 cert_rotation.go:137] Starting client certificate rotation controller
	I0109 00:05:44.640165     124 round_trippers.go:463] GET https://172.24.100.178:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0109 00:05:44.640165     124 round_trippers.go:469] Request Headers:
	I0109 00:05:44.640165     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:44.640165     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:44.656328     124 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0109 00:05:44.657158     124 round_trippers.go:577] Response Headers:
	I0109 00:05:44.657158     124 round_trippers.go:580]     Content-Length: 291
	I0109 00:05:44.657158     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:44 GMT
	I0109 00:05:44.657158     124 round_trippers.go:580]     Audit-Id: a92932d0-f37d-4821-8f4d-87595fc93d1c
	I0109 00:05:44.657276     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:44.657276     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:44.657276     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:44.657336     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:44.657440     124 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"40c365d2-2414-4cb8-9731-fc615f6d2dcd","resourceVersion":"223","creationTimestamp":"2024-01-09T00:05:31Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0109 00:05:44.658387     124 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"40c365d2-2414-4cb8-9731-fc615f6d2dcd","resourceVersion":"223","creationTimestamp":"2024-01-09T00:05:31Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0109 00:05:44.658593     124 round_trippers.go:463] PUT https://172.24.100.178:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0109 00:05:44.658666     124 round_trippers.go:469] Request Headers:
	I0109 00:05:44.658666     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:44.658742     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:44.658813     124 round_trippers.go:473]     Content-Type: application/json
	I0109 00:05:44.677281     124 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0109 00:05:44.678283     124 round_trippers.go:577] Response Headers:
	I0109 00:05:44.678283     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:44.678283     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:44.678283     124 round_trippers.go:580]     Content-Length: 291
	I0109 00:05:44.678366     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:44 GMT
	I0109 00:05:44.678366     124 round_trippers.go:580]     Audit-Id: 5ed5245d-d9b7-4daf-b8de-9a5fdbd84605
	I0109 00:05:44.678366     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:44.678366     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:44.678991     124 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"40c365d2-2414-4cb8-9731-fc615f6d2dcd","resourceVersion":"317","creationTimestamp":"2024-01-09T00:05:31Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
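The PUT above rewrites the coredns deployment's Scale subresource, taking spec.replicas from 2 down to 1 so this single-control-plane profile runs one CoreDNS pod. A minimal kubectl equivalent of that rescale, assuming the multinode-173500 kubeconfig context this run uses, would be:

    # Rescale coredns to one replica; same effect as the PUT to .../deployments/coredns/scale above.
    kubectl --context multinode-173500 -n kube-system scale deployment/coredns --replicas=1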
	I0109 00:05:44.906297     124 command_runner.go:130] > apiVersion: v1
	I0109 00:05:44.906297     124 command_runner.go:130] > data:
	I0109 00:05:44.906297     124 command_runner.go:130] >   Corefile: |
	I0109 00:05:44.906297     124 command_runner.go:130] >     .:53 {
	I0109 00:05:44.906297     124 command_runner.go:130] >         errors
	I0109 00:05:44.906297     124 command_runner.go:130] >         health {
	I0109 00:05:44.906297     124 command_runner.go:130] >            lameduck 5s
	I0109 00:05:44.906297     124 command_runner.go:130] >         }
	I0109 00:05:44.906297     124 command_runner.go:130] >         ready
	I0109 00:05:44.906297     124 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0109 00:05:44.906297     124 command_runner.go:130] >            pods insecure
	I0109 00:05:44.906297     124 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0109 00:05:44.906297     124 command_runner.go:130] >            ttl 30
	I0109 00:05:44.906297     124 command_runner.go:130] >         }
	I0109 00:05:44.906297     124 command_runner.go:130] >         prometheus :9153
	I0109 00:05:44.906297     124 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0109 00:05:44.906297     124 command_runner.go:130] >            max_concurrent 1000
	I0109 00:05:44.906297     124 command_runner.go:130] >         }
	I0109 00:05:44.906297     124 command_runner.go:130] >         cache 30
	I0109 00:05:44.906297     124 command_runner.go:130] >         loop
	I0109 00:05:44.906297     124 command_runner.go:130] >         reload
	I0109 00:05:44.906297     124 command_runner.go:130] >         loadbalance
	I0109 00:05:44.906297     124 command_runner.go:130] >     }
	I0109 00:05:44.906297     124 command_runner.go:130] > kind: ConfigMap
	I0109 00:05:44.907295     124 command_runner.go:130] > metadata:
	I0109 00:05:44.907295     124 command_runner.go:130] >   creationTimestamp: "2024-01-09T00:05:31Z"
	I0109 00:05:44.907295     124 command_runner.go:130] >   name: coredns
	I0109 00:05:44.907295     124 command_runner.go:130] >   namespace: kube-system
	I0109 00:05:44.907295     124 command_runner.go:130] >   resourceVersion: "219"
	I0109 00:05:44.907295     124 command_runner.go:130] >   uid: 3f96b20d-2896-4a3f-95df-633f61fcd852
	I0109 00:05:44.907295     124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.24.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0109 00:05:45.153607     124 round_trippers.go:463] GET https://172.24.100.178:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0109 00:05:45.153607     124 round_trippers.go:469] Request Headers:
	I0109 00:05:45.153713     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:45.153713     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:45.174461     124 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0109 00:05:45.174461     124 round_trippers.go:577] Response Headers:
	I0109 00:05:45.174582     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:45.174582     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:45.174582     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:45.174582     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:45.174582     124 round_trippers.go:580]     Content-Length: 291
	I0109 00:05:45.174582     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:45 GMT
	I0109 00:05:45.174692     124 round_trippers.go:580]     Audit-Id: 96c42b63-1469-48f5-974e-348e1e871f6c
	I0109 00:05:45.174692     124 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"40c365d2-2414-4cb8-9731-fc615f6d2dcd","resourceVersion":"330","creationTimestamp":"2024-01-09T00:05:31Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0109 00:05:45.174931     124 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-173500" context rescaled to 1 replicas
	I0109 00:05:45.175102     124 start.go:223] Will wait 6m0s for node &{Name: IP:172.24.100.178 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0109 00:05:45.182327     124 out.go:177] * Verifying Kubernetes components...
	I0109 00:05:45.206121     124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
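The "Verifying Kubernetes components" step begins by confirming the kubelet unit is active inside the guest (the systemctl command above, run over SSH). A hedged way to repeat that check by hand, using minikube's own ssh wrapper against this profile, would be:

    # Exit status 0 means kubelet is active; the command mirrors the ssh_runner line above.
    minikube -p multinode-173500 ssh -- sudo systemctl is-active --quiet service kubelet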
	I0109 00:05:45.681498     124 command_runner.go:130] > configmap/coredns replaced
	I0109 00:05:45.687170     124 start.go:929] {"host.minikube.internal": 172.24.96.1} host record injected into CoreDNS's ConfigMap
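That injection is the sed pipeline logged by ssh_runner at 00:05:44.907295: it fetches the coredns ConfigMap, splices a hosts {} block (mapping host.minikube.internal to the Windows host gateway 172.24.96.1) in front of the forward plugin plus a log directive in front of errors, and replaces the ConfigMap. Reformatted for readability, the same in-VM pipeline is:

    # Fetch, edit, and replace the coredns ConfigMap (paths are the ones minikube uses inside the VM).
    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.24.96.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -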
	I0109 00:05:45.688315     124 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:05:45.689099     124 kapi.go:59] client config for multinode-173500: &rest.Config{Host:"https://172.24.100.178:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e2c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0109 00:05:45.690083     124 node_ready.go:35] waiting up to 6m0s for node "multinode-173500" to be "Ready" ...
	I0109 00:05:45.690379     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:45.690379     124 round_trippers.go:469] Request Headers:
	I0109 00:05:45.690379     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:45.690379     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:45.694473     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:05:45.694541     124 round_trippers.go:577] Response Headers:
	I0109 00:05:45.694541     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:45.694595     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:45.694801     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:45.694801     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:45 GMT
	I0109 00:05:45.694801     124 round_trippers.go:580]     Audit-Id: ba41a0c1-6c34-4e3f-9083-19b8d747b979
	I0109 00:05:45.694801     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:45.694801     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:46.200653     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:46.200728     124 round_trippers.go:469] Request Headers:
	I0109 00:05:46.200728     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:46.200728     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:46.207072     124 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0109 00:05:46.207198     124 round_trippers.go:577] Response Headers:
	I0109 00:05:46.207260     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:46 GMT
	I0109 00:05:46.207260     124 round_trippers.go:580]     Audit-Id: 6cfa6443-253c-40b5-a3d4-8ec925376a8d
	I0109 00:05:46.207331     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:46.207361     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:46.207361     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:46.207361     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:46.207674     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:46.695531     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:46.695531     124 round_trippers.go:469] Request Headers:
	I0109 00:05:46.695531     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:46.695866     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:46.699705     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:05:46.699705     124 round_trippers.go:577] Response Headers:
	I0109 00:05:46.700222     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:46.700294     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:46.700312     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:46 GMT
	I0109 00:05:46.700382     124 round_trippers.go:580]     Audit-Id: b9f94854-0672-4c3c-97b4-dcde00dbd464
	I0109 00:05:46.700382     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:46.700382     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:46.700850     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:46.964286     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:05:46.964286     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:05:46.964286     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:05:46.964286     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:05:46.967498     124 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:05:46.965589     124 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:05:46.968154     124 kapi.go:59] client config for multinode-173500: &rest.Config{Host:"https://172.24.100.178:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e2c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0109 00:05:46.970012     124 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:05:46.970131     124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
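The "scp memory" line means the 2676-byte storage-provisioner manifest is streamed from the host process's memory into /etc/kubernetes/addons inside the guest. A rough hand-rolled stand-in for that transfer (not minikube's actual copy mechanism, which uses its own SSH-based file copy) would be:

    # Illustrative only: stream a local copy of the manifest into the guest path over the same SSH
    # identity reported later in this log (sshutil.go at 00:05:52.044511).
    cat storage-provisioner.yaml \
      | ssh -i 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\id_rsa' \
            docker@172.24.100.178 'sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null'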
	I0109 00:05:46.970131     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:05:46.970867     124 addons.go:237] Setting addon default-storageclass=true in "multinode-173500"
	I0109 00:05:46.971109     124 host.go:66] Checking if "multinode-173500" exists ...
	I0109 00:05:46.972204     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:05:47.203750     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:47.203870     124 round_trippers.go:469] Request Headers:
	I0109 00:05:47.203870     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:47.203870     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:47.208189     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:05:47.208189     124 round_trippers.go:577] Response Headers:
	I0109 00:05:47.208189     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:47.208189     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:47.208189     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:47 GMT
	I0109 00:05:47.208189     124 round_trippers.go:580]     Audit-Id: 33f476d5-cf3e-443e-8350-e9a4363e668e
	I0109 00:05:47.208189     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:47.208189     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:47.208474     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:47.705821     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:47.705821     124 round_trippers.go:469] Request Headers:
	I0109 00:05:47.705821     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:47.705821     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:47.709597     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:05:47.709920     124 round_trippers.go:577] Response Headers:
	I0109 00:05:47.709920     124 round_trippers.go:580]     Audit-Id: 0e79ad12-e0d7-4286-9351-35473c7db686
	I0109 00:05:47.709920     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:47.709991     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:47.709991     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:47.710034     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:47.710034     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:47 GMT
	I0109 00:05:47.710306     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:47.710968     124 node_ready.go:58] node "multinode-173500" has status "Ready":"False"
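node_ready.go is the 6m0s waiter announced at 00:05:45.690083: it re-fetches the node object roughly every half second and reports the Ready condition, and the "Ready":"False" line above is that check failing while the node is still starting. A hedged shell equivalent of the same wait loop, using kubectl jsonpath instead of minikube's client-go waiter, is:

    # Poll the node's Ready condition every 0.5s until it reports True (add a timeout as needed;
    # minikube bounds this wait at 6m0s).
    until kubectl --context multinode-173500 get node multinode-173500 \
          -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' | grep -q True; do
      sleep 0.5
    done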
	I0109 00:05:48.200833     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:48.200940     124 round_trippers.go:469] Request Headers:
	I0109 00:05:48.200940     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:48.200940     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:48.204951     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:05:48.204951     124 round_trippers.go:577] Response Headers:
	I0109 00:05:48.204951     124 round_trippers.go:580]     Audit-Id: 8e9811f5-1d35-4801-b518-49140576dbeb
	I0109 00:05:48.205059     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:48.205059     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:48.205059     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:48.205059     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:48.205059     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:48 GMT
	I0109 00:05:48.205440     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:48.693196     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:48.693196     124 round_trippers.go:469] Request Headers:
	I0109 00:05:48.693315     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:48.693315     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:48.696940     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:05:48.697294     124 round_trippers.go:577] Response Headers:
	I0109 00:05:48.697294     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:48.697294     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:48 GMT
	I0109 00:05:48.697294     124 round_trippers.go:580]     Audit-Id: 0cc42868-f7dc-40dd-8502-1913a2750080
	I0109 00:05:48.697294     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:48.697294     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:48.697294     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:48.697840     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:49.201688     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:49.201688     124 round_trippers.go:469] Request Headers:
	I0109 00:05:49.201688     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:49.201688     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:49.205311     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:05:49.205311     124 round_trippers.go:577] Response Headers:
	I0109 00:05:49.206270     124 round_trippers.go:580]     Audit-Id: 49777b60-2cc8-4e3c-87b6-55e444bed9dc
	I0109 00:05:49.206270     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:49.206270     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:49.206270     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:49.206270     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:49.206270     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:49 GMT
	I0109 00:05:49.206561     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:49.279764     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:05:49.279956     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:05:49.279956     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:05:49.280098     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:05:49.279956     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:05:49.280291     124 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0109 00:05:49.280375     124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0109 00:05:49.280375     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:05:49.691956     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:49.691956     124 round_trippers.go:469] Request Headers:
	I0109 00:05:49.691956     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:49.691956     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:49.696395     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:05:49.696686     124 round_trippers.go:577] Response Headers:
	I0109 00:05:49.696686     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:49.696686     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:49 GMT
	I0109 00:05:49.696686     124 round_trippers.go:580]     Audit-Id: f24735ef-adc8-441b-8ae1-d319b99a97ac
	I0109 00:05:49.696686     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:49.696686     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:49.696686     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:49.696686     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:50.200866     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:50.200999     124 round_trippers.go:469] Request Headers:
	I0109 00:05:50.200999     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:50.200999     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:50.204427     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:05:50.204427     124 round_trippers.go:577] Response Headers:
	I0109 00:05:50.204427     124 round_trippers.go:580]     Audit-Id: e427707f-7bc6-4c4e-9f63-bfc339617824
	I0109 00:05:50.204427     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:50.204427     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:50.204820     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:50.204820     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:50.204820     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:50 GMT
	I0109 00:05:50.205179     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:50.205779     124 node_ready.go:58] node "multinode-173500" has status "Ready":"False"
	I0109 00:05:50.691112     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:50.691184     124 round_trippers.go:469] Request Headers:
	I0109 00:05:50.691184     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:50.691184     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:50.694875     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:05:50.694875     124 round_trippers.go:577] Response Headers:
	I0109 00:05:50.694875     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:50.694875     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:50.694875     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:50.694875     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:50 GMT
	I0109 00:05:50.694875     124 round_trippers.go:580]     Audit-Id: c4230f58-b128-445a-a74d-7cbdee6862cf
	I0109 00:05:50.694875     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:50.696145     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:51.198659     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:51.198659     124 round_trippers.go:469] Request Headers:
	I0109 00:05:51.198659     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:51.198659     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:51.202009     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:05:51.202009     124 round_trippers.go:577] Response Headers:
	I0109 00:05:51.202692     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:51.202692     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:51 GMT
	I0109 00:05:51.202692     124 round_trippers.go:580]     Audit-Id: 6992c845-ab7e-4dbc-81ae-3fc50113c4ab
	I0109 00:05:51.202692     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:51.202692     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:51.202692     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:51.202966     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:51.609395     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:05:51.609606     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:05:51.609606     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:05:51.691033     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:51.691116     124 round_trippers.go:469] Request Headers:
	I0109 00:05:51.691116     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:51.691116     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:51.695716     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:05:51.696074     124 round_trippers.go:577] Response Headers:
	I0109 00:05:51.696074     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:51.696074     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:51.696074     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:51.696074     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:51 GMT
	I0109 00:05:51.696074     124 round_trippers.go:580]     Audit-Id: 07d24b6c-ed80-4adc-a0e7-af283a8dbef6
	I0109 00:05:51.696074     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:51.696259     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:52.044511     124 main.go:141] libmachine: [stdout =====>] : 172.24.100.178
	
	I0109 00:05:52.044511     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:05:52.044511     124 sshutil.go:53] new ssh client: &{IP:172.24.100.178 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\id_rsa Username:docker}
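The sshutil line above is the client used to run the addon commands inside the guest: user docker on 172.24.100.178:22, authenticated with the profile's id_rsa key. A hedged manual equivalent from the Windows host (any OpenSSH-compatible client) would be:

    # Open the same session the addon installer uses; the key path is the one reported above.
    ssh -i 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\id_rsa' \
        docker@172.24.100.178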
	I0109 00:05:52.198177     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:52.198341     124 round_trippers.go:469] Request Headers:
	I0109 00:05:52.198341     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:52.198341     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:52.201151     124 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:05:52.202133     124 round_trippers.go:577] Response Headers:
	I0109 00:05:52.202133     124 round_trippers.go:580]     Audit-Id: 177d8b14-6e0f-43c7-acbb-9f48da9a4ab8
	I0109 00:05:52.202133     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:52.202133     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:52.202133     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:52.202133     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:52.202133     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:52 GMT
	I0109 00:05:52.202133     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:52.243201     124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:05:52.690458     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:52.690458     124 round_trippers.go:469] Request Headers:
	I0109 00:05:52.690458     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:52.690458     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:52.696274     124 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:05:52.696274     124 round_trippers.go:577] Response Headers:
	I0109 00:05:52.696274     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:52.696274     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:52.696274     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:52.696274     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:52.696274     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:52 GMT
	I0109 00:05:52.696274     124 round_trippers.go:580]     Audit-Id: 193842a0-6218-407b-ada5-d97814b7027a
	I0109 00:05:52.696274     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:52.697797     124 node_ready.go:58] node "multinode-173500" has status "Ready":"False"
	I0109 00:05:53.128638     124 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0109 00:05:53.129390     124 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0109 00:05:53.129390     124 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0109 00:05:53.129390     124 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0109 00:05:53.129390     124 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0109 00:05:53.129390     124 command_runner.go:130] > pod/storage-provisioner created
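Those created-resource lines are the output of the apply logged at 00:05:52.243201: the manifest copied earlier to /etc/kubernetes/addons is applied in-VM with minikube's bundled kubectl and the in-VM kubeconfig. Run by hand over that SSH session, the step is:

    # Apply the storage-provisioner manifest exactly as ssh_runner does above.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml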
	I0109 00:05:53.193874     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:53.194047     124 round_trippers.go:469] Request Headers:
	I0109 00:05:53.194105     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:53.194105     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:53.197540     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:05:53.197540     124 round_trippers.go:577] Response Headers:
	I0109 00:05:53.197540     124 round_trippers.go:580]     Audit-Id: eb4646e4-1c69-4fce-a265-f1bdcefa02e0
	I0109 00:05:53.197540     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:53.197540     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:53.197540     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:53.197540     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:53.197540     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:53 GMT
	I0109 00:05:53.198537     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:53.701616     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:53.701684     124 round_trippers.go:469] Request Headers:
	I0109 00:05:53.701684     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:53.701684     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:53.707226     124 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:05:53.707226     124 round_trippers.go:577] Response Headers:
	I0109 00:05:53.707226     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:53.707226     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:53.707226     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:53.707226     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:53 GMT
	I0109 00:05:53.707226     124 round_trippers.go:580]     Audit-Id: f68c3c9e-c9d9-4a2f-afe2-012bb226454d
	I0109 00:05:53.707226     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:53.707787     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:54.205438     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:54.205529     124 round_trippers.go:469] Request Headers:
	I0109 00:05:54.205529     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:54.205529     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:54.210140     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:05:54.210249     124 round_trippers.go:577] Response Headers:
	I0109 00:05:54.210249     124 round_trippers.go:580]     Audit-Id: d9d5e40d-7fe9-4c12-8fcc-44b9ae6bbb0d
	I0109 00:05:54.210249     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:54.210298     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:54.210298     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:54.210339     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:54.210339     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:54 GMT
	I0109 00:05:54.210735     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:54.346390     124 main.go:141] libmachine: [stdout =====>] : 172.24.100.178
	
	I0109 00:05:54.346390     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:05:54.346390     124 sshutil.go:53] new ssh client: &{IP:172.24.100.178 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\id_rsa Username:docker}
	I0109 00:05:54.501978     124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0109 00:05:54.694471     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:54.694471     124 round_trippers.go:469] Request Headers:
	I0109 00:05:54.694471     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:54.694471     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:54.700458     124 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:05:54.700458     124 round_trippers.go:577] Response Headers:
	I0109 00:05:54.700458     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:54.700458     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:54 GMT
	I0109 00:05:54.700458     124 round_trippers.go:580]     Audit-Id: 91ea60bb-ecee-426c-967d-91871977710e
	I0109 00:05:54.700458     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:54.700458     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:54.700458     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:54.700458     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:54.701466     124 node_ready.go:58] node "multinode-173500" has status "Ready":"False"
	I0109 00:05:54.871508     124 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0109 00:05:54.871891     124 round_trippers.go:463] GET https://172.24.100.178:8443/apis/storage.k8s.io/v1/storageclasses
	I0109 00:05:54.871953     124 round_trippers.go:469] Request Headers:
	I0109 00:05:54.871953     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:54.871953     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:54.875089     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:05:54.875089     124 round_trippers.go:577] Response Headers:
	I0109 00:05:54.875089     124 round_trippers.go:580]     Audit-Id: 0a61d9b4-08e0-4923-b759-25c522173912
	I0109 00:05:54.875089     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:54.875089     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:54.875089     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:54.875089     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:54.875089     124 round_trippers.go:580]     Content-Length: 1273
	I0109 00:05:54.875089     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:54 GMT
	I0109 00:05:54.875089     124 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"385"},"items":[{"metadata":{"name":"standard","uid":"436ffa59-30cb-4986-b245-641de4ee0651","resourceVersion":"385","creationTimestamp":"2024-01-09T00:05:54Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-09T00:05:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0109 00:05:54.876108     124 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"436ffa59-30cb-4986-b245-641de4ee0651","resourceVersion":"385","creationTimestamp":"2024-01-09T00:05:54Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-09T00:05:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0109 00:05:54.876108     124 round_trippers.go:463] PUT https://172.24.100.178:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0109 00:05:54.876108     124 round_trippers.go:469] Request Headers:
	I0109 00:05:54.876108     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:54.876108     124 round_trippers.go:473]     Content-Type: application/json
	I0109 00:05:54.876108     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:54.879617     124 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:05:54.879617     124 round_trippers.go:577] Response Headers:
	I0109 00:05:54.879617     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:54.879617     124 round_trippers.go:580]     Content-Length: 1220
	I0109 00:05:54.879617     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:54 GMT
	I0109 00:05:54.879617     124 round_trippers.go:580]     Audit-Id: e9aa617f-12df-4855-8627-c4f393818a7b
	I0109 00:05:54.879617     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:54.879734     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:54.879734     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:54.879875     124 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"436ffa59-30cb-4986-b245-641de4ee0651","resourceVersion":"385","creationTimestamp":"2024-01-09T00:05:54Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-09T00:05:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0109 00:05:54.882797     124 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0109 00:05:54.886635     124 addons.go:508] enable addons completed in 10.265599s: enabled=[storage-provisioner default-storageclass]
	I0109 00:05:55.203656     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:55.203656     124 round_trippers.go:469] Request Headers:
	I0109 00:05:55.203729     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:55.203729     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:55.207141     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:05:55.207483     124 round_trippers.go:577] Response Headers:
	I0109 00:05:55.207483     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:55.207483     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:55 GMT
	I0109 00:05:55.207483     124 round_trippers.go:580]     Audit-Id: 2ab9fe58-76e4-4041-afaa-f603f4358e79
	I0109 00:05:55.207545     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:55.207545     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:55.207545     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:55.207691     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:55.699145     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:55.699367     124 round_trippers.go:469] Request Headers:
	I0109 00:05:55.699423     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:55.699423     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:55.702799     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:05:55.702799     124 round_trippers.go:577] Response Headers:
	I0109 00:05:55.702799     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:55 GMT
	I0109 00:05:55.702799     124 round_trippers.go:580]     Audit-Id: 3df9d84e-a62e-4890-9570-4ecab4e74185
	I0109 00:05:55.702799     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:55.702799     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:55.702799     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:55.702799     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:55.703879     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:56.193141     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:56.193224     124 round_trippers.go:469] Request Headers:
	I0109 00:05:56.193224     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:56.193224     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:56.197505     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:05:56.197921     124 round_trippers.go:577] Response Headers:
	I0109 00:05:56.197921     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:56.197921     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:56.198024     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:56.198048     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:56.198048     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:56 GMT
	I0109 00:05:56.198048     124 round_trippers.go:580]     Audit-Id: 60fca80d-e7d6-46bc-af23-70d075fcbe03
	I0109 00:05:56.198282     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:56.693243     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:56.727548     124 round_trippers.go:469] Request Headers:
	I0109 00:05:56.727811     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:56.727811     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:56.733678     124 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:05:56.733678     124 round_trippers.go:577] Response Headers:
	I0109 00:05:56.733678     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:56.733678     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:56.733678     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:56.733678     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:56 GMT
	I0109 00:05:56.733678     124 round_trippers.go:580]     Audit-Id: 6c2217b0-269b-466d-b86b-18b7f3ea44c8
	I0109 00:05:56.733678     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:56.733678     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:56.735208     124 node_ready.go:58] node "multinode-173500" has status "Ready":"False"
	I0109 00:05:57.198051     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:57.198140     124 round_trippers.go:469] Request Headers:
	I0109 00:05:57.198140     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:57.198140     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:57.201515     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:05:57.201793     124 round_trippers.go:577] Response Headers:
	I0109 00:05:57.201793     124 round_trippers.go:580]     Audit-Id: 4b85b786-ff5d-4d7f-b6e3-dd3b9eca03b4
	I0109 00:05:57.201793     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:57.201793     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:57.201793     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:57.201793     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:57.201887     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:57 GMT
	I0109 00:05:57.202114     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:57.697809     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:57.698033     124 round_trippers.go:469] Request Headers:
	I0109 00:05:57.698033     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:57.698033     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:57.704686     124 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0109 00:05:57.705495     124 round_trippers.go:577] Response Headers:
	I0109 00:05:57.705495     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:57 GMT
	I0109 00:05:57.705495     124 round_trippers.go:580]     Audit-Id: 6a4335bf-ec77-40e3-b440-d671281e317c
	I0109 00:05:57.705550     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:57.705550     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:57.705550     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:57.705550     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:57.705550     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:58.200393     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:58.200606     124 round_trippers.go:469] Request Headers:
	I0109 00:05:58.200606     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:58.200606     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:58.207059     124 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0109 00:05:58.207059     124 round_trippers.go:577] Response Headers:
	I0109 00:05:58.207059     124 round_trippers.go:580]     Audit-Id: 1476b983-e871-4f44-82a8-97673b7222db
	I0109 00:05:58.207059     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:58.207059     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:58.207059     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:58.207059     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:58.207059     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:58 GMT
	I0109 00:05:58.207683     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:58.701168     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:58.701168     124 round_trippers.go:469] Request Headers:
	I0109 00:05:58.701168     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:58.701168     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:58.705073     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:05:58.705073     124 round_trippers.go:577] Response Headers:
	I0109 00:05:58.705073     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:58.705073     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:58.705073     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:58 GMT
	I0109 00:05:58.705073     124 round_trippers.go:580]     Audit-Id: 2b43837a-5525-4f60-a88a-4d26b00a90b5
	I0109 00:05:58.705073     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:58.705073     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:58.705708     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"328","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0109 00:05:59.202758     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:59.202758     124 round_trippers.go:469] Request Headers:
	I0109 00:05:59.202758     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:59.202758     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:59.214824     124 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0109 00:05:59.214824     124 round_trippers.go:577] Response Headers:
	I0109 00:05:59.215503     124 round_trippers.go:580]     Audit-Id: 215d5680-2028-4ce6-84b0-68880492f01e
	I0109 00:05:59.215503     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:59.215503     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:59.215503     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:59.215503     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:59.215503     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:59 GMT
	I0109 00:05:59.215852     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"394","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0109 00:05:59.216472     124 node_ready.go:49] node "multinode-173500" has status "Ready":"True"
	I0109 00:05:59.216472     124 node_ready.go:38] duration metric: took 13.5263026s waiting for node "multinode-173500" to be "Ready" ...
	I0109 00:05:59.216472     124 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:05:59.216599     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods
	I0109 00:05:59.216599     124 round_trippers.go:469] Request Headers:
	I0109 00:05:59.216599     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:59.216722     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:59.229136     124 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0109 00:05:59.229136     124 round_trippers.go:577] Response Headers:
	I0109 00:05:59.229136     124 round_trippers.go:580]     Audit-Id: 84b675b9-6f2b-42af-9231-48804acbd821
	I0109 00:05:59.229136     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:59.229136     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:59.229136     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:59.229136     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:59.229136     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:59 GMT
	I0109 00:05:59.230867     124 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"400"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"398","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54012 chars]
	I0109 00:05:59.235563     124 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace to be "Ready" ...
	I0109 00:05:59.236127     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bkss9
	I0109 00:05:59.236127     124 round_trippers.go:469] Request Headers:
	I0109 00:05:59.236127     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:59.236218     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:59.240423     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:05:59.240423     124 round_trippers.go:577] Response Headers:
	I0109 00:05:59.240423     124 round_trippers.go:580]     Audit-Id: 4da661d8-a69d-4e09-a9b0-ccfa87caf8aa
	I0109 00:05:59.240423     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:59.240423     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:59.240423     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:59.240423     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:59.240423     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:59 GMT
	I0109 00:05:59.243232     124 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"398","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0109 00:05:59.244249     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:59.244249     124 round_trippers.go:469] Request Headers:
	I0109 00:05:59.244249     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:59.244249     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:59.247418     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:05:59.247418     124 round_trippers.go:577] Response Headers:
	I0109 00:05:59.247418     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:59.247418     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:59.247418     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:59.247418     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:59.247418     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:59 GMT
	I0109 00:05:59.247418     124 round_trippers.go:580]     Audit-Id: dca2bbc4-659b-4067-a4fa-4002c02106c5
	I0109 00:05:59.248421     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"394","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0109 00:05:59.749728     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bkss9
	I0109 00:05:59.749790     124 round_trippers.go:469] Request Headers:
	I0109 00:05:59.749790     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:59.749790     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:59.756394     124 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0109 00:05:59.756394     124 round_trippers.go:577] Response Headers:
	I0109 00:05:59.756461     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:59 GMT
	I0109 00:05:59.756461     124 round_trippers.go:580]     Audit-Id: 50f476b4-965a-4857-bc2f-a2d21c2460d6
	I0109 00:05:59.756461     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:59.756461     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:59.756461     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:59.756461     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:59.756703     124 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"398","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0109 00:05:59.757502     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:05:59.757530     124 round_trippers.go:469] Request Headers:
	I0109 00:05:59.757530     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:05:59.757530     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:05:59.761567     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:05:59.761567     124 round_trippers.go:577] Response Headers:
	I0109 00:05:59.761567     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:05:59.761567     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:05:59 GMT
	I0109 00:05:59.761567     124 round_trippers.go:580]     Audit-Id: 45983136-c956-4868-b64d-225da51e460b
	I0109 00:05:59.761567     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:05:59.761567     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:05:59.761567     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:05:59.761567     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"394","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0109 00:06:00.242756     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bkss9
	I0109 00:06:00.242756     124 round_trippers.go:469] Request Headers:
	I0109 00:06:00.242756     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:06:00.242756     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:06:00.246821     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:06:00.246821     124 round_trippers.go:577] Response Headers:
	I0109 00:06:00.246821     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:06:00.246821     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:06:00.246821     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:06:00 GMT
	I0109 00:06:00.246821     124 round_trippers.go:580]     Audit-Id: f3e1294b-ed63-4346-a79f-6d23e5967e1b
	I0109 00:06:00.246821     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:06:00.246821     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:06:00.246821     124 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"398","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0109 00:06:00.248073     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:06:00.248162     124 round_trippers.go:469] Request Headers:
	I0109 00:06:00.248162     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:06:00.248199     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:06:00.251523     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:06:00.251523     124 round_trippers.go:577] Response Headers:
	I0109 00:06:00.251523     124 round_trippers.go:580]     Audit-Id: fd74d800-6d08-409e-b57f-9ae3fc407a8b
	I0109 00:06:00.251976     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:06:00.251976     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:06:00.251976     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:06:00.251976     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:06:00.252030     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:06:00 GMT
	I0109 00:06:00.252279     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"394","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0109 00:06:00.748300     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bkss9
	I0109 00:06:00.748403     124 round_trippers.go:469] Request Headers:
	I0109 00:06:00.748403     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:06:00.748403     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:06:00.753500     124 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:06:00.753500     124 round_trippers.go:577] Response Headers:
	I0109 00:06:00.754059     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:06:00.754059     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:06:00.754059     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:06:00 GMT
	I0109 00:06:00.754059     124 round_trippers.go:580]     Audit-Id: ed33b69c-93e0-40ba-b35f-5035d2b0fcae
	I0109 00:06:00.754059     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:06:00.754059     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:06:00.754396     124 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"398","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0109 00:06:00.755289     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:06:00.755289     124 round_trippers.go:469] Request Headers:
	I0109 00:06:00.755289     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:06:00.755289     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:06:00.759205     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:06:00.759331     124 round_trippers.go:577] Response Headers:
	I0109 00:06:00.759331     124 round_trippers.go:580]     Audit-Id: 8cdb15db-73fb-4244-a266-ff49630e7b87
	I0109 00:06:00.759396     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:06:00.759396     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:06:00.759468     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:06:00.759468     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:06:00.759468     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:06:00 GMT
	I0109 00:06:00.759468     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"394","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0109 00:06:01.237819     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bkss9
	I0109 00:06:01.237910     124 round_trippers.go:469] Request Headers:
	I0109 00:06:01.237910     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:06:01.237910     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:06:01.241297     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:06:01.242019     124 round_trippers.go:577] Response Headers:
	I0109 00:06:01.242019     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:06:01.242019     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:06:01.242019     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:06:01 GMT
	I0109 00:06:01.242019     124 round_trippers.go:580]     Audit-Id: 59bc188d-4c45-4f86-b78d-6da0e6d2630d
	I0109 00:06:01.242019     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:06:01.242019     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:06:01.242125     124 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"398","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0109 00:06:01.242936     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:06:01.242936     124 round_trippers.go:469] Request Headers:
	I0109 00:06:01.242936     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:06:01.243016     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:06:01.250814     124 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0109 00:06:01.250814     124 round_trippers.go:577] Response Headers:
	I0109 00:06:01.250814     124 round_trippers.go:580]     Audit-Id: d03f18bc-c775-4de5-ac22-963db11a804c
	I0109 00:06:01.250814     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:06:01.250814     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:06:01.250814     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:06:01.250814     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:06:01.250814     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:06:01 GMT
	I0109 00:06:01.250814     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"394","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0109 00:06:01.251529     124 pod_ready.go:102] pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace has status "Ready":"False"
	I0109 00:06:01.737991     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bkss9
	I0109 00:06:01.737991     124 round_trippers.go:469] Request Headers:
	I0109 00:06:01.737991     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:06:01.738098     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:06:01.741940     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:06:01.741940     124 round_trippers.go:577] Response Headers:
	I0109 00:06:01.741940     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:06:01.741940     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:06:01.741940     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:06:01 GMT
	I0109 00:06:01.741940     124 round_trippers.go:580]     Audit-Id: e0f31249-bf0b-43ed-9563-b6801a87a725
	I0109 00:06:01.742631     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:06:01.742631     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:06:01.742825     124 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"413","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
	I0109 00:06:01.743505     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:06:01.743505     124 round_trippers.go:469] Request Headers:
	I0109 00:06:01.743610     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:06:01.743610     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:06:01.748865     124 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:06:01.748865     124 round_trippers.go:577] Response Headers:
	I0109 00:06:01.748865     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:06:01 GMT
	I0109 00:06:01.748865     124 round_trippers.go:580]     Audit-Id: d6997bec-dfab-47a2-aaad-c7911268caad
	I0109 00:06:01.748865     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:06:01.748865     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:06:01.748865     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:06:01.748865     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:06:01.749998     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"394","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0109 00:06:01.750384     124 pod_ready.go:92] pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace has status "Ready":"True"
	I0109 00:06:01.750384     124 pod_ready.go:81] duration metric: took 2.5148201s waiting for pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace to be "Ready" ...
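
The pod_ready loop above polls GET /api/v1/namespaces/kube-system/pods/<name> roughly every 500ms and reads the pod's Ready condition out of each response before moving on to the next control-plane pod. The following is a minimal illustrative sketch of the same check with client-go; it is not minikube's own code, and the kubeconfig path is a placeholder assumption (the namespace and pod name are the ones appearing in this log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True, which is
    // what the pod_ready.go lines above derive from each GET response.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumption: a kubeconfig for the test cluster exists at this path.
        cfg, err := clientcmd.BuildConfigFromFlags("", "C:/Users/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()

        for {
            pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-bkss9", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            select {
            case <-ctx.Done():
                panic("timed out waiting for pod to become Ready")
            case <-time.After(500 * time.Millisecond):
            }
        }
    }
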
	I0109 00:06:01.750384     124 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:06:01.750746     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-173500
	I0109 00:06:01.750746     124 round_trippers.go:469] Request Headers:
	I0109 00:06:01.750746     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:06:01.750746     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:06:01.756063     124 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:06:01.756063     124 round_trippers.go:577] Response Headers:
	I0109 00:06:01.756063     124 round_trippers.go:580]     Audit-Id: 561251d3-1ad1-4cbb-b8bc-ac52ea238e1c
	I0109 00:06:01.756063     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:06:01.756063     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:06:01.756063     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:06:01.756063     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:06:01.756063     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:06:01 GMT
	I0109 00:06:01.756063     124 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-173500","namespace":"kube-system","uid":"bbcb3d33-7daf-43d9-b596-66cbce3552ad","resourceVersion":"371","creationTimestamp":"2024-01-09T00:05:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.24.100.178:2379","kubernetes.io/config.hash":"8b9b6f8e7be121dc69cce9e8aca59417","kubernetes.io/config.mirror":"8b9b6f8e7be121dc69cce9e8aca59417","kubernetes.io/config.seen":"2024-01-09T00:05:31.606498270Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
	I0109 00:06:01.757492     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:06:01.757492     124 round_trippers.go:469] Request Headers:
	I0109 00:06:01.757492     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:06:01.757492     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:06:01.760940     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:06:01.760940     124 round_trippers.go:577] Response Headers:
	I0109 00:06:01.760940     124 round_trippers.go:580]     Audit-Id: 3e1b73f4-048d-4e1a-abe0-c0e58b60d256
	I0109 00:06:01.760940     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:06:01.760940     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:06:01.760940     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:06:01.761047     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:06:01.761047     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:06:01 GMT
	I0109 00:06:01.761104     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"394","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0109 00:06:01.761692     124 pod_ready.go:92] pod "etcd-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:06:01.761692     124 pod_ready.go:81] duration metric: took 11.308ms waiting for pod "etcd-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:06:01.761692     124 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:06:01.761692     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-173500
	I0109 00:06:01.761692     124 round_trippers.go:469] Request Headers:
	I0109 00:06:01.761692     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:06:01.761692     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:06:01.764326     124 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:06:01.764326     124 round_trippers.go:577] Response Headers:
	I0109 00:06:01.764326     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:06:01.764326     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:06:01.764326     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:06:01.764326     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:06:01.764326     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:06:01 GMT
	I0109 00:06:01.764326     124 round_trippers.go:580]     Audit-Id: 2d39ee92-4378-49a1-9892-90f9e1387396
	I0109 00:06:01.765306     124 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-173500","namespace":"kube-system","uid":"6ec45d85-b2d5-483f-afdd-ee98dbb0edd1","resourceVersion":"372","creationTimestamp":"2024-01-09T00:05:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.24.100.178:8443","kubernetes.io/config.hash":"6d4780fbf78826137e2d0549410b3c52","kubernetes.io/config.mirror":"6d4780fbf78826137e2d0549410b3c52","kubernetes.io/config.seen":"2024-01-09T00:05:31.606503570Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
	I0109 00:06:01.765306     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:06:01.765306     124 round_trippers.go:469] Request Headers:
	I0109 00:06:01.765306     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:06:01.765306     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:06:01.768535     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:06:01.769117     124 round_trippers.go:577] Response Headers:
	I0109 00:06:01.769117     124 round_trippers.go:580]     Audit-Id: 41da8f7f-e95f-4821-b776-709504840e35
	I0109 00:06:01.769117     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:06:01.769117     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:06:01.769117     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:06:01.769117     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:06:01.769201     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:06:01 GMT
	I0109 00:06:01.769333     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"394","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0109 00:06:01.769780     124 pod_ready.go:92] pod "kube-apiserver-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:06:01.769780     124 pod_ready.go:81] duration metric: took 8.0885ms waiting for pod "kube-apiserver-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:06:01.769857     124 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:06:01.769920     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-173500
	I0109 00:06:01.769920     124 round_trippers.go:469] Request Headers:
	I0109 00:06:01.769920     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:06:01.770003     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:06:01.772344     124 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:06:01.772344     124 round_trippers.go:577] Response Headers:
	I0109 00:06:01.772344     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:06:01.772344     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:06:01.772344     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:06:01.772344     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:06:01.772344     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:06:01 GMT
	I0109 00:06:01.773382     124 round_trippers.go:580]     Audit-Id: a65bf84e-ce99-4bc0-98a0-e0c2a4e5536d
	I0109 00:06:01.773668     124 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-173500","namespace":"kube-system","uid":"a0252ea5-5d6a-4303-b7e6-151481d4cd8a","resourceVersion":"373","creationTimestamp":"2024-01-09T00:05:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f6b180d5a2686dc98b0355b6df7f53ea","kubernetes.io/config.mirror":"f6b180d5a2686dc98b0355b6df7f53ea","kubernetes.io/config.seen":"2024-01-09T00:05:31.606504770Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
	I0109 00:06:01.774188     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:06:01.774188     124 round_trippers.go:469] Request Headers:
	I0109 00:06:01.774188     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:06:01.774188     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:06:01.779327     124 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:06:01.779447     124 round_trippers.go:577] Response Headers:
	I0109 00:06:01.779447     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:06:01.779447     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:06:01 GMT
	I0109 00:06:01.779447     124 round_trippers.go:580]     Audit-Id: 6964750b-7553-4e6b-9ab1-2dc070a3dd91
	I0109 00:06:01.779447     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:06:01.779447     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:06:01.779447     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:06:01.779447     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"394","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0109 00:06:01.780007     124 pod_ready.go:92] pod "kube-controller-manager-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:06:01.780193     124 pod_ready.go:81] duration metric: took 10.1498ms waiting for pod "kube-controller-manager-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:06:01.780193     124 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qrtm6" in "kube-system" namespace to be "Ready" ...
	I0109 00:06:01.780273     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qrtm6
	I0109 00:06:01.780273     124 round_trippers.go:469] Request Headers:
	I0109 00:06:01.780273     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:06:01.780273     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:06:01.782879     124 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:06:01.782879     124 round_trippers.go:577] Response Headers:
	I0109 00:06:01.782879     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:06:01.782879     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:06:01.782879     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:06:01.782879     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:06:01.782879     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:06:01 GMT
	I0109 00:06:01.782879     124 round_trippers.go:580]     Audit-Id: b0627845-69fd-40de-8035-2dd42272164f
	I0109 00:06:01.782879     124 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qrtm6","generateName":"kube-proxy-","namespace":"kube-system","uid":"37d066e0-6ff3-4f22-abc3-6bddfa64736e","resourceVersion":"374","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec2434b9-012c-4df1-b401-04556fed7700","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec2434b9-012c-4df1-b401-04556fed7700\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0109 00:06:01.782879     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:06:01.782879     124 round_trippers.go:469] Request Headers:
	I0109 00:06:01.782879     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:06:01.782879     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:06:01.786126     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:06:01.786126     124 round_trippers.go:577] Response Headers:
	I0109 00:06:01.786126     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:06:01 GMT
	I0109 00:06:01.786126     124 round_trippers.go:580]     Audit-Id: b556ecf8-999a-4b66-8ba5-2de8db122694
	I0109 00:06:01.786126     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:06:01.786126     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:06:01.786126     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:06:01.786126     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:06:01.786126     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"394","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0109 00:06:01.787126     124 pod_ready.go:92] pod "kube-proxy-qrtm6" in "kube-system" namespace has status "Ready":"True"
	I0109 00:06:01.787126     124 pod_ready.go:81] duration metric: took 6.9333ms waiting for pod "kube-proxy-qrtm6" in "kube-system" namespace to be "Ready" ...
	I0109 00:06:01.787126     124 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:06:01.940896     124 request.go:629] Waited for 153.7697ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-173500
	I0109 00:06:01.941252     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-173500
	I0109 00:06:01.941252     124 round_trippers.go:469] Request Headers:
	I0109 00:06:01.941369     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:06:01.941369     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:06:01.949059     124 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0109 00:06:01.949760     124 round_trippers.go:577] Response Headers:
	I0109 00:06:01.949760     124 round_trippers.go:580]     Audit-Id: c79a9dd5-b2aa-4f00-9493-7fe63ab45f68
	I0109 00:06:01.949760     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:06:01.949760     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:06:01.949838     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:06:01.949838     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:06:01.949838     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:06:01 GMT
	I0109 00:06:01.950089     124 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-173500","namespace":"kube-system","uid":"31d8cdf6-292f-4b3c-87c5-951fc34d0ea4","resourceVersion":"370","creationTimestamp":"2024-01-09T00:05:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70306498a200a6bbe0aa0b41e240d63b","kubernetes.io/config.mirror":"70306498a200a6bbe0aa0b41e240d63b","kubernetes.io/config.seen":"2024-01-09T00:05:21.481168866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
	I0109 00:06:02.143954     124 request.go:629] Waited for 193.3502ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:06:02.144402     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:06:02.144402     124 round_trippers.go:469] Request Headers:
	I0109 00:06:02.144402     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:06:02.144402     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:06:02.151680     124 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0109 00:06:02.152514     124 round_trippers.go:577] Response Headers:
	I0109 00:06:02.152514     124 round_trippers.go:580]     Audit-Id: 9cef4da5-86b4-4889-aaf1-42d231d51ae6
	I0109 00:06:02.152514     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:06:02.152514     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:06:02.152652     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:06:02.152652     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:06:02.152652     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:06:02 GMT
	I0109 00:06:02.152913     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"394","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0109 00:06:02.153473     124 pod_ready.go:92] pod "kube-scheduler-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:06:02.153473     124 pod_ready.go:81] duration metric: took 366.3469ms waiting for pod "kube-scheduler-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:06:02.153557     124 pod_ready.go:38] duration metric: took 2.9370844s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:06:02.153705     124 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:06:02.167347     124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:06:02.188796     124 command_runner.go:130] > 2039
	I0109 00:06:02.188796     124 api_server.go:72] duration metric: took 17.0136481s to wait for apiserver process to appear ...
	I0109 00:06:02.188796     124 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:06:02.188796     124 api_server.go:253] Checking apiserver healthz at https://172.24.100.178:8443/healthz ...
	I0109 00:06:02.200986     124 api_server.go:279] https://172.24.100.178:8443/healthz returned 200:
	ok
	I0109 00:06:02.201337     124 round_trippers.go:463] GET https://172.24.100.178:8443/version
	I0109 00:06:02.201376     124 round_trippers.go:469] Request Headers:
	I0109 00:06:02.201376     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:06:02.201376     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:06:02.202767     124 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0109 00:06:02.202767     124 round_trippers.go:577] Response Headers:
	I0109 00:06:02.202767     124 round_trippers.go:580]     Content-Length: 264
	I0109 00:06:02.203523     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:06:02 GMT
	I0109 00:06:02.203523     124 round_trippers.go:580]     Audit-Id: 7be240aa-5e63-45b7-9617-ba9b81721047
	I0109 00:06:02.203523     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:06:02.203523     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:06:02.203523     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:06:02.203732     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:06:02.203838     124 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0109 00:06:02.203928     124 api_server.go:141] control plane version: v1.28.4
	I0109 00:06:02.203928     124 api_server.go:131] duration metric: took 15.1325ms to wait for apiserver health ...
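
api_server.go first confirms a kube-apiserver process exists (the pgrep above), then treats a 200 response with the literal body "ok" from /healthz as healthy, and finally reads /version for the control-plane version echoed at request.go:1212. A small sketch of the two HTTP probes follows, assuming anonymous access to these endpoints (the default RBAC exposes /healthz and /version to unauthenticated clients) and skipping TLS verification purely for brevity:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func probe(c *http.Client, url string) {
        resp, err := c.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s -> %d: %s\n", url, resp.StatusCode, body)
    }

    func main() {
        // Skipping certificate verification is a shortcut for this sketch; the
        // real client trusts the cluster CA kept under .minikube\certs instead.
        c := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        probe(c, "https://172.24.100.178:8443/healthz") // healthy when the body is exactly "ok"
        probe(c, "https://172.24.100.178:8443/version") // JSON with major/minor/gitVersion as above
    }
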
	I0109 00:06:02.204015     124 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:06:02.349604     124 request.go:629] Waited for 145.3121ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods
	I0109 00:06:02.349714     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods
	I0109 00:06:02.349714     124 round_trippers.go:469] Request Headers:
	I0109 00:06:02.349714     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:06:02.349714     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:06:02.357967     124 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0109 00:06:02.357967     124 round_trippers.go:577] Response Headers:
	I0109 00:06:02.357967     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:06:02.357967     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:06:02.357967     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:06:02.357967     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:06:02 GMT
	I0109 00:06:02.358876     124 round_trippers.go:580]     Audit-Id: 9c175f83-5be0-4c98-9bfc-d87d22c216c2
	I0109 00:06:02.358876     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:06:02.360130     124 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"413","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54128 chars]
	I0109 00:06:02.362877     124 system_pods.go:59] 8 kube-system pods found
	I0109 00:06:02.362877     124 system_pods.go:61] "coredns-5dd5756b68-bkss9" [463fb6c6-1e85-419f-9c13-96e58a2ec22e] Running
	I0109 00:06:02.362877     124 system_pods.go:61] "etcd-multinode-173500" [bbcb3d33-7daf-43d9-b596-66cbce3552ad] Running
	I0109 00:06:02.362877     124 system_pods.go:61] "kindnet-ht547" [711faf1a-9f77-487c-bd84-1e227ab9c51a] Running
	I0109 00:06:02.362877     124 system_pods.go:61] "kube-apiserver-multinode-173500" [6ec45d85-b2d5-483f-afdd-ee98dbb0edd1] Running
	I0109 00:06:02.362877     124 system_pods.go:61] "kube-controller-manager-multinode-173500" [a0252ea5-5d6a-4303-b7e6-151481d4cd8a] Running
	I0109 00:06:02.362877     124 system_pods.go:61] "kube-proxy-qrtm6" [37d066e0-6ff3-4f22-abc3-6bddfa64736e] Running
	I0109 00:06:02.362877     124 system_pods.go:61] "kube-scheduler-multinode-173500" [31d8cdf6-292f-4b3c-87c5-951fc34d0ea4] Running
	I0109 00:06:02.362877     124 system_pods.go:61] "storage-provisioner" [936240bb-4bdd-4681-91a9-cb458c623805] Running
	I0109 00:06:02.362877     124 system_pods.go:74] duration metric: took 158.8621ms to wait for pod list to return data ...
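
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines are client-go's own rate limiter (QPS 5, burst 10 when left unset) pacing these GETs, not the apiserver pushing back. A short sketch of where those knobs live on a rest.Config; the values and the kubeconfig path are illustrative only:

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: placeholder kubeconfig path.
        cfg, err := clientcmd.BuildConfigFromFlags("", "C:/Users/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }

        // Raising the client-side limits trades the "Waited for ..." delays
        // seen in this log for more concurrent load on the apiserver.
        cfg.QPS = 50
        cfg.Burst = 100

        _ = kubernetes.NewForConfigOrDie(cfg)
    }
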
	I0109 00:06:02.362877     124 default_sa.go:34] waiting for default service account to be created ...
	I0109 00:06:02.552324     124 request.go:629] Waited for 189.2273ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.100.178:8443/api/v1/namespaces/default/serviceaccounts
	I0109 00:06:02.552436     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/namespaces/default/serviceaccounts
	I0109 00:06:02.552436     124 round_trippers.go:469] Request Headers:
	I0109 00:06:02.552673     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:06:02.552673     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:06:02.556973     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:06:02.556973     124 round_trippers.go:577] Response Headers:
	I0109 00:06:02.556973     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:06:02.556973     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:06:02.556973     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:06:02.557102     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:06:02.557102     124 round_trippers.go:580]     Content-Length: 261
	I0109 00:06:02.557102     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:06:02 GMT
	I0109 00:06:02.557102     124 round_trippers.go:580]     Audit-Id: acbcdf18-6fac-49af-9f79-b43ce867a947
	I0109 00:06:02.557223     124 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a9cc6a7c-f512-49f6-8485-edb39bd8695b","resourceVersion":"311","creationTimestamp":"2024-01-09T00:05:44Z"}}]}
	I0109 00:06:02.557609     124 default_sa.go:45] found service account: "default"
	I0109 00:06:02.557694     124 default_sa.go:55] duration metric: took 194.8172ms for default service account to be created ...
	I0109 00:06:02.557694     124 system_pods.go:116] waiting for k8s-apps to be running ...
	I0109 00:06:02.741613     124 request.go:629] Waited for 183.8175ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods
	I0109 00:06:02.741613     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods
	I0109 00:06:02.741613     124 round_trippers.go:469] Request Headers:
	I0109 00:06:02.741613     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:06:02.741613     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:06:02.742285     124 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0109 00:06:02.742285     124 round_trippers.go:577] Response Headers:
	I0109 00:06:02.742285     124 round_trippers.go:580]     Audit-Id: c61a3a74-69fc-45e9-a06b-9d971befbb92
	I0109 00:06:02.742285     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:06:02.742285     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:06:02.742285     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:06:02.742285     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:06:02.747269     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:06:02 GMT
	I0109 00:06:02.748610     124 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"413","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54128 chars]
	I0109 00:06:02.754632     124 system_pods.go:86] 8 kube-system pods found
	I0109 00:06:02.754632     124 system_pods.go:89] "coredns-5dd5756b68-bkss9" [463fb6c6-1e85-419f-9c13-96e58a2ec22e] Running
	I0109 00:06:02.754632     124 system_pods.go:89] "etcd-multinode-173500" [bbcb3d33-7daf-43d9-b596-66cbce3552ad] Running
	I0109 00:06:02.754632     124 system_pods.go:89] "kindnet-ht547" [711faf1a-9f77-487c-bd84-1e227ab9c51a] Running
	I0109 00:06:02.754632     124 system_pods.go:89] "kube-apiserver-multinode-173500" [6ec45d85-b2d5-483f-afdd-ee98dbb0edd1] Running
	I0109 00:06:02.754632     124 system_pods.go:89] "kube-controller-manager-multinode-173500" [a0252ea5-5d6a-4303-b7e6-151481d4cd8a] Running
	I0109 00:06:02.754632     124 system_pods.go:89] "kube-proxy-qrtm6" [37d066e0-6ff3-4f22-abc3-6bddfa64736e] Running
	I0109 00:06:02.754632     124 system_pods.go:89] "kube-scheduler-multinode-173500" [31d8cdf6-292f-4b3c-87c5-951fc34d0ea4] Running
	I0109 00:06:02.754632     124 system_pods.go:89] "storage-provisioner" [936240bb-4bdd-4681-91a9-cb458c623805] Running
	I0109 00:06:02.754632     124 system_pods.go:126] duration metric: took 196.938ms to wait for k8s-apps to be running ...
	I0109 00:06:02.754632     124 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:06:02.770312     124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:06:02.794588     124 system_svc.go:56] duration metric: took 39.9561ms WaitForService to wait for kubelet.
	I0109 00:06:02.794588     124 kubeadm.go:581] duration metric: took 17.6194405s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
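
The kubelet check above runs `sudo systemctl is-active --quiet service kubelet` through ssh_runner and relies purely on the exit status, which feeds the WaitForService duration and the kubeadm wait summary. A local sketch of the same probe with os/exec, run directly rather than over SSH (an assumption made here for brevity):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // With --quiet, systemctl prints nothing and answers purely through
        // its exit status; minikube runs this inside the guest VM over SSH.
        cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
        err := cmd.Run()

        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("kubelet is active")
        case errors.As(err, &exitErr):
            fmt.Printf("kubelet is not active (exit code %d)\n", exitErr.ExitCode())
        default:
            panic(err) // systemctl itself could not be started
        }
    }
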
	I0109 00:06:02.794720     124 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:06:02.945871     124 request.go:629] Waited for 150.7818ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.100.178:8443/api/v1/nodes
	I0109 00:06:02.946001     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes
	I0109 00:06:02.946001     124 round_trippers.go:469] Request Headers:
	I0109 00:06:02.946001     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:06:02.946001     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:06:02.949225     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:06:02.950101     124 round_trippers.go:577] Response Headers:
	I0109 00:06:02.950101     124 round_trippers.go:580]     Audit-Id: 14281057-a54e-4591-ba4e-13a3c0d33f01
	I0109 00:06:02.950101     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:06:02.950101     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:06:02.950101     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:06:02.950101     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:06:02.950101     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:06:02 GMT
	I0109 00:06:02.950350     124 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"421"},"items":[{"metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"420","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5012 chars]
	I0109 00:06:02.951067     124 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:06:02.951135     124 node_conditions.go:123] node cpu capacity is 2
	I0109 00:06:02.951211     124 node_conditions.go:105] duration metric: took 156.491ms to run NodePressure ...
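
node_conditions reads the node's capacity (2 CPUs and 17784752Ki of ephemeral storage here) and its pressure conditions out of the NodeList response. A sketch of reading the same fields with client-go; the kubeconfig path is a placeholder and the node name is the one in this log:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "C:/Users/jenkins/.kube/config") // placeholder path
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        node, err := client.CoreV1().Nodes().Get(context.Background(), "multinode-173500", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }

        cpu := node.Status.Capacity[corev1.ResourceCPU]
        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        fmt.Printf("cpu capacity: %s, ephemeral storage: %s\n", cpu.String(), storage.String())

        // The NodePressure verification looks at the same object's conditions:
        // MemoryPressure and DiskPressure should both be False on a healthy node.
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
                fmt.Printf("%s=%s\n", c.Type, c.Status)
            }
        }
    }
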
	I0109 00:06:02.951231     124 start.go:228] waiting for startup goroutines ...
	I0109 00:06:02.951231     124 start.go:233] waiting for cluster config update ...
	I0109 00:06:02.951231     124 start.go:242] writing updated cluster config ...
	I0109 00:06:02.956492     124 out.go:177] 
	I0109 00:06:02.965808     124 config.go:182] Loaded profile config "multinode-173500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0109 00:06:02.966276     124 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\config.json ...
	I0109 00:06:02.973480     124 out.go:177] * Starting worker node multinode-173500-m02 in cluster multinode-173500
	I0109 00:06:02.976554     124 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0109 00:06:02.976705     124 cache.go:56] Caching tarball of preloaded images
	I0109 00:06:02.976854     124 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0109 00:06:02.976854     124 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0109 00:06:02.977403     124 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\config.json ...
	I0109 00:06:02.982186     124 start.go:365] acquiring machines lock for multinode-173500-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0109 00:06:02.982186     124 start.go:369] acquired machines lock for "multinode-173500-m02" in 0s
	I0109 00:06:02.982908     124 start.go:93] Provisioning new machine with config: &{Name:multinode-173500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-173500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.24.100.178 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0109 00:06:02.983041     124 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0109 00:06:02.985674     124 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0109 00:06:02.985674     124 start.go:159] libmachine.API.Create for "multinode-173500" (driver="hyperv")
	I0109 00:06:02.985674     124 client.go:168] LocalClient.Create starting
	I0109 00:06:02.986440     124 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0109 00:06:02.986440     124 main.go:141] libmachine: Decoding PEM data...
	I0109 00:06:02.986808     124 main.go:141] libmachine: Parsing certificate...
	I0109 00:06:02.986941     124 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0109 00:06:02.987180     124 main.go:141] libmachine: Decoding PEM data...
	I0109 00:06:02.987180     124 main.go:141] libmachine: Parsing certificate...
	I0109 00:06:02.987180     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0109 00:06:04.943581     124 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0109 00:06:04.943725     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:06:04.943797     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0109 00:06:06.743992     124 main.go:141] libmachine: [stdout =====>] : False
	
	I0109 00:06:06.743992     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:06:06.744085     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0109 00:06:08.267177     124 main.go:141] libmachine: [stdout =====>] : True
	
	I0109 00:06:08.267348     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:06:08.267422     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0109 00:06:11.951959     124 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0109 00:06:11.952244     124 main.go:141] libmachine: [stderr =====>] : 
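
Every "[executing ==>]" entry from here on is libmachine shelling out to powershell.exe -NoProfile -NonInteractive and logging stdout and stderr separately, one invocation per Hyper-V operation. A minimal sketch of that pattern; the query is a simplified form of the Get-VMSwitch call above, and this is not the driver's actual code:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // runPowerShell mirrors the "[executing ==>] / [stdout] / [stderr]" pattern
    // in the log: one non-interactive powershell.exe invocation per command,
    // with stdout and stderr captured separately.
    func runPowerShell(script string) (string, string, error) {
        cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script)
        var stdout, stderr bytes.Buffer
        cmd.Stdout = &stdout
        cmd.Stderr = &stderr
        err := cmd.Run()
        return stdout.String(), stderr.String(), err
    }

    func main() {
        // Simplified form of the switch query shown in the log above.
        out, errOut, err := runPowerShell(`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`)
        fmt.Println("[stdout =====>]", out)
        fmt.Println("[stderr =====>]", errOut)
        if err != nil {
            panic(err)
        }
    }
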
	I0109 00:06:11.957387     124 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0109 00:06:12.455608     124 main.go:141] libmachine: Creating SSH key...
	I0109 00:06:12.535628     124 main.go:141] libmachine: Creating VM...
	I0109 00:06:12.535628     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0109 00:06:15.543514     124 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0109 00:06:15.543607     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:06:15.543607     124 main.go:141] libmachine: Using switch "Default Switch"
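
The switch query returns JSON via ConvertTo-Json, and the driver settles on "Default Switch" because no external switch is present. A small sketch of parsing that output in Go; the struct mirrors the fields selected in the log's query:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // vmSwitch matches the fields selected by the ConvertTo-Json query above.
    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int // Hyper-V enum: 0=Private, 1=Internal, 2=External
    }

    func main() {
        // Output copied from the [stdout =====>] block above.
        raw := `[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`

        var switches []vmSwitch
        if err := json.Unmarshal([]byte(raw), &switches); err != nil {
            panic(err)
        }
        for _, s := range switches {
            fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
        }
    }
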
	I0109 00:06:15.543778     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0109 00:06:17.370491     124 main.go:141] libmachine: [stdout =====>] : True
	
	I0109 00:06:17.370814     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:06:17.370888     124 main.go:141] libmachine: Creating VHD
	I0109 00:06:17.370888     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0109 00:06:21.168178     124 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1AE8C0BB-68A0-484E-9463-ABBF8C389B30
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0109 00:06:21.168272     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:06:21.168340     124 main.go:141] libmachine: Writing magic tar header
	I0109 00:06:21.168396     124 main.go:141] libmachine: Writing SSH key tar header
	I0109 00:06:21.177553     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0109 00:06:24.386606     124 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:06:24.386606     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:06:24.386936     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02\disk.vhd' -SizeBytes 20000MB
	I0109 00:06:26.986883     124 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:06:26.987093     124 main.go:141] libmachine: [stderr =====>] : 
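
The disk preparation above is a three-step pattern: create a small fixed-size VHD, write the "magic tar header" / SSH-key tar payload into it (the two log lines between New-VHD and Convert-VHD), convert it to a dynamically expanding VHD, then resize it to the requested capacity. A rough Go sketch of the same cmdlet sequence, using the paths and sizes from this log (illustrative helper, requires Windows with Hyper-V admin rights):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runPS is a tiny illustrative helper, not minikube's implementation.
    func runPS(cmd string) error {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        dir := `C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02`
        steps := []string{
            // 1. small fixed VHD the driver can write raw data into
            `Hyper-V\New-VHD -Path '` + dir + `\fixed.vhd' -SizeBytes 10MB -Fixed`,
            // (the driver writes its tar header + SSH key into fixed.vhd at this point)
            // 2. convert to a dynamically expanding disk, dropping the fixed source
            `Hyper-V\Convert-VHD -Path '` + dir + `\fixed.vhd' -DestinationPath '` + dir + `\disk.vhd' -VHDType Dynamic -DeleteSource`,
            // 3. grow the dynamic disk to the requested size
            `Hyper-V\Resize-VHD -Path '` + dir + `\disk.vhd' -SizeBytes 20000MB`,
        }
        for _, s := range steps {
            if err := runPS(s); err != nil {
                panic(err)
            }
        }
    }
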
	I0109 00:06:26.987093     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-173500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0109 00:06:30.788452     124 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-173500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0109 00:06:30.788813     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:06:30.788813     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-173500-m02 -DynamicMemoryEnabled $false
	I0109 00:06:33.111020     124 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:06:33.111020     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:06:33.111305     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-173500-m02 -Count 2
	I0109 00:06:35.335711     124 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:06:35.335711     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:06:35.335711     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-173500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02\boot2docker.iso'
	I0109 00:06:38.037758     124 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:06:38.038011     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:06:38.038114     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-173500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02\disk.vhd'
	I0109 00:06:40.741393     124 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:06:40.741393     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:06:40.741484     124 main.go:141] libmachine: Starting VM...
	I0109 00:06:40.741484     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-173500-m02
	I0109 00:06:43.931547     124 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:06:43.931547     124 main.go:141] libmachine: [stderr =====>] : 
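
Condensed into one readable sequence, the VM creation and configuration seen across the preceding log lines amounts to six cmdlets run back to back before Start-VM. An illustrative Go loop over those exact commands (same hedges as the sketches above; not minikube's code):

    package main

    import "os/exec"

    func main() {
        name := "multinode-173500-m02"
        dir := `C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\` + name
        cmds := []string{
            `Hyper-V\New-VM ` + name + ` -Path '` + dir + `' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`,
            `Hyper-V\Set-VMMemory -VMName ` + name + ` -DynamicMemoryEnabled $false`,
            `Hyper-V\Set-VMProcessor ` + name + ` -Count 2`,
            `Hyper-V\Set-VMDvdDrive -VMName ` + name + ` -Path '` + dir + `\boot2docker.iso'`,
            `Hyper-V\Add-VMHardDiskDrive -VMName ` + name + ` -Path '` + dir + `\disk.vhd'`,
            `Hyper-V\Start-VM ` + name,
        }
        for _, c := range cmds {
            if out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", c).CombinedOutput(); err != nil {
                panic(string(out))
            }
        }
    }
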
	I0109 00:06:43.931547     124 main.go:141] libmachine: Waiting for host to start...
	I0109 00:06:43.931547     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:06:46.248400     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:06:46.248638     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:06:46.248686     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:06:48.894673     124 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:06:48.894673     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:06:49.897945     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:06:52.159537     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:06:52.159537     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:06:52.159537     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:06:54.801871     124 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:06:54.801871     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:06:55.816598     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:06:58.042179     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:06:58.042518     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:06:58.042518     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:07:00.663815     124 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:07:00.664159     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:07:01.678413     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:07:03.953315     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:07:03.953315     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:07:03.953404     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:07:06.558735     124 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:07:06.558863     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:07:07.571715     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:07:09.850963     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:07:09.851040     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:07:09.851040     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:07:12.515494     124 main.go:141] libmachine: [stdout =====>] : 172.24.108.84
	
	I0109 00:07:12.515741     124 main.go:141] libmachine: [stderr =====>] : 
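
Between Start-VM and provisioning, the driver polls the VM state and the first network adapter's IP list until an address shows up; the empty stdout lines above are polls that returned nothing yet. A self-contained Go sketch of such a wait loop (the timeout, sleep interval, and the IPv6 skip are illustrative simplifications):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func ps(cmd string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        const vm = "multinode-173500-m02"
        deadline := time.Now().Add(5 * time.Minute)
        for time.Now().Before(deadline) {
            state, _ := ps(`( Hyper-V\Get-VM ` + vm + ` ).state`)
            ip, _ := ps(`(( Hyper-V\Get-VM ` + vm + ` ).networkadapters[0]).ipaddresses[0]`)
            if state == "Running" && ip != "" && !strings.Contains(ip, ":") { // crude IPv4-only check
                fmt.Println("VM is up at", ip)
                return
            }
            time.Sleep(time.Second)
        }
        panic("timed out waiting for " + vm + " to report an IPv4 address")
    }
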
	I0109 00:07:12.515741     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:07:14.694490     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:07:14.694835     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:07:14.695017     124 machine.go:88] provisioning docker machine ...
	I0109 00:07:14.695095     124 buildroot.go:166] provisioning hostname "multinode-173500-m02"
	I0109 00:07:14.695095     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:07:16.872602     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:07:16.872852     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:07:16.873053     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:07:19.527430     124 main.go:141] libmachine: [stdout =====>] : 172.24.108.84
	
	I0109 00:07:19.527646     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:07:19.533598     124 main.go:141] libmachine: Using SSH client type: native
	I0109 00:07:19.544624     124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.108.84 22 <nil> <nil>}
	I0109 00:07:19.544624     124 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-173500-m02 && echo "multinode-173500-m02" | sudo tee /etc/hostname
	I0109 00:07:19.698622     124 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-173500-m02
	
	I0109 00:07:19.698695     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:07:21.965924     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:07:21.966110     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:07:21.966209     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:07:24.590251     124 main.go:141] libmachine: [stdout =====>] : 172.24.108.84
	
	I0109 00:07:24.590326     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:07:24.597103     124 main.go:141] libmachine: Using SSH client type: native
	I0109 00:07:24.597682     124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.108.84 22 <nil> <nil>}
	I0109 00:07:24.597817     124 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-173500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-173500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-173500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:07:24.736780     124 main.go:141] libmachine: SSH cmd err, output: <nil>: 
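
Once the guest has an address, the rest of provisioning happens over SSH with the generated key (the id_rsa path appears in the ssh client line a bit further down). A minimal Go sketch of running the same hostname and hosts-file commands with golang.org/x/crypto/ssh; the hosts-file one-liner is a simplified stand-in for the script in the log, and host-key checking is disabled only because this is a throwaway local VM:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test VM
        }
        client, err := ssh.Dial("tcp", "172.24.108.84:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        for _, cmd := range []string{
            `sudo hostname multinode-173500-m02 && echo "multinode-173500-m02" | sudo tee /etc/hostname`,
            // simplified version of the /etc/hosts check-and-append script in the log
            `grep -q multinode-173500-m02 /etc/hosts || echo '127.0.1.1 multinode-173500-m02' | sudo tee -a /etc/hosts`,
        } {
            sess, err := client.NewSession()
            if err != nil {
                panic(err)
            }
            out, err := sess.CombinedOutput(cmd)
            sess.Close()
            fmt.Print(string(out))
            if err != nil {
                panic(err)
            }
        }
    }
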
	I0109 00:07:24.736847     124 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0109 00:07:24.736847     124 buildroot.go:174] setting up certificates
	I0109 00:07:24.736913     124 provision.go:83] configureAuth start
	I0109 00:07:24.736969     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:07:26.923241     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:07:26.923436     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:07:26.923436     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:07:29.550890     124 main.go:141] libmachine: [stdout =====>] : 172.24.108.84
	
	I0109 00:07:29.551142     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:07:29.551142     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:07:31.750293     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:07:31.750484     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:07:31.750484     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:07:34.419848     124 main.go:141] libmachine: [stdout =====>] : 172.24.108.84
	
	I0109 00:07:34.419936     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:07:34.419936     124 provision.go:138] copyHostCerts
	I0109 00:07:34.420015     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0109 00:07:34.420015     124 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0109 00:07:34.420015     124 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0109 00:07:34.420809     124 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0109 00:07:34.422621     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0109 00:07:34.422621     124 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0109 00:07:34.422621     124 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0109 00:07:34.423209     124 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0109 00:07:34.424083     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0109 00:07:34.424083     124 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0109 00:07:34.424083     124 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0109 00:07:34.424821     124 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0109 00:07:34.425699     124 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-173500-m02 san=[172.24.108.84 172.24.108.84 localhost 127.0.0.1 minikube multinode-173500-m02]
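
The configureAuth step above issues a Docker TLS server certificate signed by the local minikube CA, with the node IP and hostnames from the san=[...] list as subject alternative names. A compact Go sketch of issuing such a certificate with crypto/x509; the file paths, key size, validity period, and the assumption that the CA key is PKCS#1 RSA are illustrative, not necessarily the values minikube uses:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caPEM, _ := os.ReadFile("ca.pem")         // CA certificate (PEM)
        caKeyPEM, _ := os.ReadFile("ca-key.pem")  // CA private key (assumed PEM, PKCS#1)
        caBlock, _ := pem.Decode(caPEM)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        if err != nil {
            panic(err)
        }
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
        if err != nil {
            panic(err)
        }

        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-173500-m02"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the log line: node IP, loopback, and the host names
            IPAddresses: []net.IP{net.ParseIP("172.24.108.84"), net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "multinode-173500-m02"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
        pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)})
    }
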
	I0109 00:07:34.538730     124 provision.go:172] copyRemoteCerts
	I0109 00:07:34.554056     124 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:07:34.554056     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:07:36.737592     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:07:36.737790     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:07:36.737883     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:07:39.359835     124 main.go:141] libmachine: [stdout =====>] : 172.24.108.84
	
	I0109 00:07:39.360095     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:07:39.360244     124 sshutil.go:53] new ssh client: &{IP:172.24.108.84 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02\id_rsa Username:docker}
	I0109 00:07:39.470833     124 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9167008s)
	I0109 00:07:39.470873     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0109 00:07:39.471500     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0109 00:07:39.508880     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0109 00:07:39.509275     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0109 00:07:39.547370     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0109 00:07:39.547535     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0109 00:07:39.585967     124 provision.go:86] duration metric: configureAuth took 14.8490522s
	I0109 00:07:39.586040     124 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:07:39.586235     124 config.go:182] Loaded profile config "multinode-173500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0109 00:07:39.586235     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:07:41.746291     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:07:41.746518     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:07:41.746518     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:07:44.318631     124 main.go:141] libmachine: [stdout =====>] : 172.24.108.84
	
	I0109 00:07:44.318704     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:07:44.323876     124 main.go:141] libmachine: Using SSH client type: native
	I0109 00:07:44.324630     124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.108.84 22 <nil> <nil>}
	I0109 00:07:44.324630     124 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0109 00:07:44.451422     124 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0109 00:07:44.451422     124 buildroot.go:70] root file system type: tmpfs
	I0109 00:07:44.454925     124 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0109 00:07:44.454975     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:07:46.649762     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:07:46.650035     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:07:46.650107     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:07:49.249090     124 main.go:141] libmachine: [stdout =====>] : 172.24.108.84
	
	I0109 00:07:49.249275     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:07:49.254830     124 main.go:141] libmachine: Using SSH client type: native
	I0109 00:07:49.255458     124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.108.84 22 <nil> <nil>}
	I0109 00:07:49.255829     124 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.24.100.178"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0109 00:07:49.408793     124 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.24.100.178
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0109 00:07:49.408915     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:07:51.598319     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:07:51.598319     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:07:51.598411     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:07:54.191133     124 main.go:141] libmachine: [stdout =====>] : 172.24.108.84
	
	I0109 00:07:54.191464     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:07:54.196496     124 main.go:141] libmachine: Using SSH client type: native
	I0109 00:07:54.197838     124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.108.84 22 <nil> <nil>}
	I0109 00:07:54.197838     124 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0109 00:07:55.366583     124 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0109 00:07:55.366678     124 machine.go:91] provisioned docker machine in 40.6716362s
	I0109 00:07:55.366678     124 client.go:171] LocalClient.Create took 1m52.3810013s
	I0109 00:07:55.366747     124 start.go:167] duration metric: libmachine.API.Create for "multinode-173500" took 1m52.3810705s
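
Note the install idiom in the one-liner a few lines above: the new unit is written to docker.service.new, diffed against the existing file, and only moved into place (followed by daemon-reload, enable, and restart) when it actually differs, which keeps provisioning idempotent. A local-only Go sketch of the same write-if-changed pattern, running the systemctl commands directly instead of over SSH (function name and truncated unit text are illustrative):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installIfChanged writes content to path and restarts the unit only when the
    // file content actually changed; otherwise it leaves the running service alone.
    func installIfChanged(path string, content []byte, unit string) error {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, content) {
            return nil // nothing to do
        }
        if err := os.WriteFile(path, content, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", unit},
            {"systemctl", "restart", unit},
        } {
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %s", err, out)
            }
        }
        return nil
    }

    func main() {
        unitText := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated example
        if err := installIfChanged("/lib/systemd/system/docker.service", unitText, "docker"); err != nil {
            panic(err)
        }
    }
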
	I0109 00:07:55.366812     124 start.go:300] post-start starting for "multinode-173500-m02" (driver="hyperv")
	I0109 00:07:55.366812     124 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:07:55.384700     124 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:07:55.384700     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:07:57.525358     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:07:57.525358     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:07:57.525475     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:08:00.126224     124 main.go:141] libmachine: [stdout =====>] : 172.24.108.84
	
	I0109 00:08:00.126224     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:08:00.126224     124 sshutil.go:53] new ssh client: &{IP:172.24.108.84 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02\id_rsa Username:docker}
	I0109 00:08:00.238091     124 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8533904s)
	I0109 00:08:00.253460     124 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:08:00.259795     124 command_runner.go:130] > NAME=Buildroot
	I0109 00:08:00.260647     124 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0109 00:08:00.260647     124 command_runner.go:130] > ID=buildroot
	I0109 00:08:00.260647     124 command_runner.go:130] > VERSION_ID=2021.02.12
	I0109 00:08:00.260647     124 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0109 00:08:00.260647     124 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:08:00.260765     124 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0109 00:08:00.261396     124 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0109 00:08:00.263005     124 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0109 00:08:00.263005     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0109 00:08:00.276046     124 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:08:00.293588     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0109 00:08:00.336143     124 start.go:303] post-start completed in 4.9693306s
	I0109 00:08:00.339470     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:08:02.492099     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:08:02.492187     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:08:02.492260     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:08:05.094895     124 main.go:141] libmachine: [stdout =====>] : 172.24.108.84
	
	I0109 00:08:05.094895     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:08:05.095345     124 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\config.json ...
	I0109 00:08:05.098271     124 start.go:128] duration metric: createHost completed in 2m2.1152261s
	I0109 00:08:05.098271     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:08:07.270829     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:08:07.270829     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:08:07.270915     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:08:09.854135     124 main.go:141] libmachine: [stdout =====>] : 172.24.108.84
	
	I0109 00:08:09.854135     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:08:09.861208     124 main.go:141] libmachine: Using SSH client type: native
	I0109 00:08:09.862340     124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.108.84 22 <nil> <nil>}
	I0109 00:08:09.862340     124 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0109 00:08:10.001435     124 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704758890.000533020
	
	I0109 00:08:10.001435     124 fix.go:206] guest clock: 1704758890.000533020
	I0109 00:08:10.001551     124 fix.go:219] Guest: 2024-01-09 00:08:10.00053302 +0000 UTC Remote: 2024-01-09 00:08:05.0982716 +0000 UTC m=+338.689355801 (delta=4.90226142s)
	I0109 00:08:10.001551     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:08:12.205751     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:08:12.205891     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:08:12.205891     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:08:14.872245     124 main.go:141] libmachine: [stdout =====>] : 172.24.108.84
	
	I0109 00:08:14.872456     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:08:14.877933     124 main.go:141] libmachine: Using SSH client type: native
	I0109 00:08:14.878727     124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.108.84 22 <nil> <nil>}
	I0109 00:08:14.878727     124 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1704758890
	I0109 00:08:15.014176     124 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jan  9 00:08:10 UTC 2024
	
	I0109 00:08:15.014176     124 fix.go:226] clock set: Tue Jan  9 00:08:10 UTC 2024
	 (err=<nil>)
	I0109 00:08:15.014242     124 start.go:83] releasing machines lock for "multinode-173500-m02", held for 2m12.0320504s
	I0109 00:08:15.014529     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:08:17.182979     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:08:17.182979     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:08:17.183107     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:08:19.820747     124 main.go:141] libmachine: [stdout =====>] : 172.24.108.84
	
	I0109 00:08:19.820747     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:08:19.824861     124 out.go:177] * Found network options:
	I0109 00:08:19.828449     124 out.go:177]   - NO_PROXY=172.24.100.178
	W0109 00:08:19.831112     124 proxy.go:119] fail to check proxy env: Error ip not in block
	I0109 00:08:19.834271     124 out.go:177]   - NO_PROXY=172.24.100.178
	W0109 00:08:19.836410     124 proxy.go:119] fail to check proxy env: Error ip not in block
	W0109 00:08:19.837381     124 proxy.go:119] fail to check proxy env: Error ip not in block
	I0109 00:08:19.840404     124 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:08:19.841374     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:08:19.851437     124 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0109 00:08:19.851437     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:08:22.156943     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:08:22.156943     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:08:22.156943     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:08:22.156943     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:08:22.157152     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:08:22.157195     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:08:24.868794     124 main.go:141] libmachine: [stdout =====>] : 172.24.108.84
	
	I0109 00:08:24.868950     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:08:24.869205     124 sshutil.go:53] new ssh client: &{IP:172.24.108.84 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02\id_rsa Username:docker}
	I0109 00:08:24.889297     124 main.go:141] libmachine: [stdout =====>] : 172.24.108.84
	
	I0109 00:08:24.889297     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:08:24.889514     124 sshutil.go:53] new ssh client: &{IP:172.24.108.84 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02\id_rsa Username:docker}
	I0109 00:08:25.072555     124 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0109 00:08:25.072669     124 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0109 00:08:25.072669     124 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2212319s)
	W0109 00:08:25.072761     124 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:08:25.072946     124 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2315713s)
	I0109 00:08:25.086429     124 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:08:25.110571     124 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0109 00:08:25.110698     124 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
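
Before choosing a runtime, the CNI setup disables any bridge/podman CNI configs by renaming them with a .mk_disabled suffix, as the find/mv one-liner above shows (here it catches 87-podman-bridge.conflist). A small Go equivalent of that rename-to-disable step, with the glob patterns taken from the log; it would run on the node itself with sufficient privileges:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, err := filepath.Glob(pattern)
            if err != nil {
                panic(err)
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    panic(err)
                }
                fmt.Println("disabled", m)
            }
        }
    }
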
	I0109 00:08:25.110698     124 start.go:475] detecting cgroup driver to use...
	I0109 00:08:25.110698     124 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:08:25.139866     124 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0109 00:08:25.154992     124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0109 00:08:25.186530     124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0109 00:08:25.204103     124 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0109 00:08:25.217235     124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0109 00:08:25.245021     124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0109 00:08:25.275093     124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0109 00:08:25.306482     124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0109 00:08:25.336452     124 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:08:25.366044     124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0109 00:08:25.396106     124 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:08:25.420204     124 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0109 00:08:25.434306     124 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:08:25.462302     124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:08:25.630742     124 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0109 00:08:25.658885     124 start.go:475] detecting cgroup driver to use...
	I0109 00:08:25.671701     124 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0109 00:08:25.690932     124 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0109 00:08:25.691008     124 command_runner.go:130] > [Unit]
	I0109 00:08:25.691008     124 command_runner.go:130] > Description=Docker Application Container Engine
	I0109 00:08:25.691008     124 command_runner.go:130] > Documentation=https://docs.docker.com
	I0109 00:08:25.691008     124 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0109 00:08:25.691008     124 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0109 00:08:25.691008     124 command_runner.go:130] > StartLimitBurst=3
	I0109 00:08:25.691086     124 command_runner.go:130] > StartLimitIntervalSec=60
	I0109 00:08:25.691086     124 command_runner.go:130] > [Service]
	I0109 00:08:25.691086     124 command_runner.go:130] > Type=notify
	I0109 00:08:25.691138     124 command_runner.go:130] > Restart=on-failure
	I0109 00:08:25.691138     124 command_runner.go:130] > Environment=NO_PROXY=172.24.100.178
	I0109 00:08:25.691138     124 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0109 00:08:25.691138     124 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0109 00:08:25.691138     124 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0109 00:08:25.691138     124 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0109 00:08:25.691138     124 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0109 00:08:25.691138     124 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0109 00:08:25.691138     124 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0109 00:08:25.691138     124 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0109 00:08:25.691138     124 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0109 00:08:25.691138     124 command_runner.go:130] > ExecStart=
	I0109 00:08:25.691138     124 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0109 00:08:25.691327     124 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0109 00:08:25.691327     124 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0109 00:08:25.691327     124 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0109 00:08:25.691327     124 command_runner.go:130] > LimitNOFILE=infinity
	I0109 00:08:25.691327     124 command_runner.go:130] > LimitNPROC=infinity
	I0109 00:08:25.691327     124 command_runner.go:130] > LimitCORE=infinity
	I0109 00:08:25.691327     124 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0109 00:08:25.691327     124 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0109 00:08:25.691327     124 command_runner.go:130] > TasksMax=infinity
	I0109 00:08:25.691327     124 command_runner.go:130] > TimeoutStartSec=0
	I0109 00:08:25.691327     124 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0109 00:08:25.691327     124 command_runner.go:130] > Delegate=yes
	I0109 00:08:25.691327     124 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0109 00:08:25.691327     124 command_runner.go:130] > KillMode=process
	I0109 00:08:25.691327     124 command_runner.go:130] > [Install]
	I0109 00:08:25.691327     124 command_runner.go:130] > WantedBy=multi-user.target
	I0109 00:08:25.704308     124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:08:25.742915     124 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:08:25.780933     124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:08:25.811942     124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0109 00:08:25.845940     124 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0109 00:08:25.900904     124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0109 00:08:25.921911     124 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:08:25.953205     124 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0109 00:08:25.968205     124 ssh_runner.go:195] Run: which cri-dockerd
	I0109 00:08:25.974042     124 command_runner.go:130] > /usr/bin/cri-dockerd
	I0109 00:08:25.987932     124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0109 00:08:26.003658     124 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0109 00:08:26.048968     124 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0109 00:08:26.243359     124 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0109 00:08:26.398286     124 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0109 00:08:26.398286     124 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0109 00:08:26.441262     124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:08:26.616874     124 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0109 00:08:28.203304     124 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5864302s)
	I0109 00:08:28.216910     124 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0109 00:08:28.399221     124 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0109 00:08:28.575303     124 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0109 00:08:28.743295     124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:08:28.920822     124 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0109 00:08:28.959594     124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:08:29.139581     124 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0109 00:08:29.247013     124 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0109 00:08:29.263886     124 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0109 00:08:29.271704     124 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0109 00:08:29.271704     124 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0109 00:08:29.271704     124 command_runner.go:130] > Device: 16h/22d	Inode: 924         Links: 1
	I0109 00:08:29.271704     124 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0109 00:08:29.271704     124 command_runner.go:130] > Access: 2024-01-09 00:08:29.160544033 +0000
	I0109 00:08:29.271704     124 command_runner.go:130] > Modify: 2024-01-09 00:08:29.160544033 +0000
	I0109 00:08:29.271974     124 command_runner.go:130] > Change: 2024-01-09 00:08:29.164544033 +0000
	I0109 00:08:29.271974     124 command_runner.go:130] >  Birth: -
	I0109 00:08:29.272081     124 start.go:543] Will wait 60s for crictl version
	I0109 00:08:29.285514     124 ssh_runner.go:195] Run: which crictl
	I0109 00:08:29.291646     124 command_runner.go:130] > /usr/bin/crictl
	I0109 00:08:29.308717     124 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:08:29.377113     124 command_runner.go:130] > Version:  0.1.0
	I0109 00:08:29.377174     124 command_runner.go:130] > RuntimeName:  docker
	I0109 00:08:29.377174     124 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0109 00:08:29.377174     124 command_runner.go:130] > RuntimeApiVersion:  v1
	I0109 00:08:29.377229     124 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0109 00:08:29.387605     124 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0109 00:08:29.429202     124 command_runner.go:130] > 24.0.7
	I0109 00:08:29.440424     124 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0109 00:08:29.475428     124 command_runner.go:130] > 24.0.7
	I0109 00:08:29.482727     124 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0109 00:08:29.485170     124 out.go:177]   - env NO_PROXY=172.24.100.178
	I0109 00:08:29.487346     124 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0109 00:08:29.492158     124 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0109 00:08:29.492158     124 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0109 00:08:29.492158     124 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0109 00:08:29.492158     124 ip.go:207] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:c4:61:0a Flags:up|broadcast|multicast|running}
	I0109 00:08:29.494917     124 ip.go:210] interface addr: fe80::3fa5:15f5:46dc:dc8f/64
	I0109 00:08:29.494917     124 ip.go:210] interface addr: 172.24.96.1/20
	I0109 00:08:29.511245     124 ssh_runner.go:195] Run: grep 172.24.96.1	host.minikube.internal$ /etc/hosts
	I0109 00:08:29.517199     124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.24.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:08:29.536590     124 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500 for IP: 172.24.108.84
	I0109 00:08:29.536590     124 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:08:29.537333     124 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0109 00:08:29.537769     124 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0109 00:08:29.537919     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0109 00:08:29.537919     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0109 00:08:29.537919     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0109 00:08:29.538448     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0109 00:08:29.539024     124 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0109 00:08:29.539360     124 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0109 00:08:29.539444     124 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0109 00:08:29.539753     124 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0109 00:08:29.540179     124 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0109 00:08:29.540435     124 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0109 00:08:29.540700     124 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0109 00:08:29.540700     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:08:29.540700     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0109 00:08:29.541276     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0109 00:08:29.542572     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:08:29.589091     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0109 00:08:29.637186     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:08:29.676323     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:08:29.716066     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:08:29.755283     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0109 00:08:29.799429     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0109 00:08:29.853659     124 ssh_runner.go:195] Run: openssl version
	I0109 00:08:29.861790     124 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0109 00:08:29.874761     124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0109 00:08:29.905958     124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0109 00:08:29.911112     124 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 23:11 /usr/share/ca-certificates/14288.pem
	I0109 00:08:29.912206     124 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:11 /usr/share/ca-certificates/14288.pem
	I0109 00:08:29.924864     124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0109 00:08:29.932924     124 command_runner.go:130] > 51391683
	I0109 00:08:29.946976     124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
	I0109 00:08:29.983979     124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0109 00:08:30.014938     124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0109 00:08:30.020988     124 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 23:11 /usr/share/ca-certificates/142882.pem
	I0109 00:08:30.020988     124 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:11 /usr/share/ca-certificates/142882.pem
	I0109 00:08:30.039240     124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0109 00:08:30.049962     124 command_runner.go:130] > 3ec20f2e
	I0109 00:08:30.068581     124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:08:30.101220     124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:08:30.131326     124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:08:30.137932     124 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 22:56 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:08:30.138239     124 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:56 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:08:30.153642     124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:08:30.160778     124 command_runner.go:130] > b5213941
	I0109 00:08:30.176227     124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
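
The three openssl/ln sequences above follow the OpenSSL subject-hash convention: each CA certificate is hashed with "openssl x509 -hash -noout" and then linked as /etc/ssl/certs/<hash>.0 so TLS clients can find it by subject (the hashes 51391683, 3ec20f2e and b5213941 printed above are exactly those subject hashes). A minimal Go sketch of that step, as an illustration only (the helper is hypothetical, not minikube's code; the path comes from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash hashes the certificate subject via openssl and creates
// the /etc/ssl/certs/<hash>.0 symlink the log shows being set up with ln -fs.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // emulate the -f in ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
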
	I0109 00:08:30.208304     124 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:08:30.214336     124 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0109 00:08:30.214946     124 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0109 00:08:30.224949     124 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0109 00:08:30.261182     124 command_runner.go:130] > cgroupfs
	I0109 00:08:30.262343     124 cni.go:84] Creating CNI manager for ""
	I0109 00:08:30.262343     124 cni.go:136] 2 nodes found, recommending kindnet
	I0109 00:08:30.262422     124 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:08:30.262503     124 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.24.108.84 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-173500 NodeName:multinode-173500-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.24.100.178"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.24.108.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:08:30.262837     124 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.24.108.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-173500-m02"
	  kubeletExtraArgs:
	    node-ip: 172.24.108.84
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.24.100.178"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0109 00:08:30.262967     124 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-173500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.24.108.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-173500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:08:30.277715     124 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0109 00:08:30.290856     124 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I0109 00:08:30.290856     124 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0109 00:08:30.304415     124 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0109 00:08:30.327009     124 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet
	I0109 00:08:30.327009     124 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm
	I0109 00:08:30.327009     124 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl
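
Each binary above is requested with a ?checksum=file:<url>.sha256 query, i.e. the cached copy is only accepted if it matches the published SHA-256 digest. A rough Go sketch of that verification step (the helper and its call site are assumptions for illustration; the URL and cache path are taken from the log):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// verifySHA256 downloads the published .sha256 file and compares it against
// the digest of the local file, failing on any mismatch.
func verifySHA256(path, sumURL string) error {
	resp, err := http.Get(sumURL)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	sum, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest

	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	err := verifySHA256(
		`C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.28.4\kubectl`,
		"https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
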
	I0109 00:08:31.331469     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0109 00:08:31.350461     124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0109 00:08:31.357854     124 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0109 00:08:31.358434     124 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0109 00:08:31.358434     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0109 00:08:32.810508     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0109 00:08:32.823465     124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0109 00:08:32.829913     124 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0109 00:08:32.831146     124 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0109 00:08:32.831146     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0109 00:08:35.085220     124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:08:35.106625     124 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0109 00:08:35.121622     124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0109 00:08:35.127735     124 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0109 00:08:35.128082     124 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0109 00:08:35.128214     124 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0109 00:08:35.810822     124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0109 00:08:35.826285     124 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0109 00:08:35.852367     124 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0109 00:08:35.891673     124 ssh_runner.go:195] Run: grep 172.24.100.178	control-plane.minikube.internal$ /etc/hosts
	I0109 00:08:35.896921     124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.24.100.178	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
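
The one-liner above refreshes the control-plane.minikube.internal entry in /etc/hosts: it filters out any stale line, appends the current IP, writes the result to a temp file, then copies it back into place. A small Go equivalent, purely illustrative (the function name, the use of rename instead of the log's "sudo cp", and the error handling are assumptions):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinControlPlane rewrites hostsPath so exactly one line maps ip to
// control-plane.minikube.internal, mirroring the shell pipeline in the log.
func pinControlPlane(hostsPath, ip string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Same filter as: grep -v $'\tcontrol-plane.minikube.internal$'
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\tcontrol-plane.minikube.internal")
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := pinControlPlane("/etc/hosts", "172.24.100.178"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
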
	I0109 00:08:35.915202     124 host.go:66] Checking if "multinode-173500" exists ...
	I0109 00:08:35.916112     124 config.go:182] Loaded profile config "multinode-173500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0109 00:08:35.916112     124 start.go:304] JoinCluster: &{Name:multinode-173500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-173500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.24.100.178 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.24.108.84 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:tru
e ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:08:35.916378     124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0109 00:08:35.916378     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:08:38.050497     124 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:08:38.050497     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:08:38.050497     124 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:08:40.626878     124 main.go:141] libmachine: [stdout =====>] : 172.24.100.178
	
	I0109 00:08:40.627136     124 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:08:40.627312     124 sshutil.go:53] new ssh client: &{IP:172.24.100.178 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\id_rsa Username:docker}
	I0109 00:08:40.836500     124 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token fs9foq.c8qlfxt1cz1jxeza --discovery-token-ca-cert-hash sha256:6a12e94bf3397e7db59fa944f4e20c2c2c34b5794397b381e3c5134eb1900391 
	I0109 00:08:40.836535     124 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.920157s)
	I0109 00:08:40.836535     124 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.24.108.84 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0109 00:08:40.836535     124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fs9foq.c8qlfxt1cz1jxeza --discovery-token-ca-cert-hash sha256:6a12e94bf3397e7db59fa944f4e20c2c2c34b5794397b381e3c5134eb1900391 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-173500-m02"
	I0109 00:08:40.900251     124 command_runner.go:130] ! W0109 00:08:40.899250    1369 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0109 00:08:41.088314     124 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0109 00:08:43.888269     124 command_runner.go:130] > [preflight] Running pre-flight checks
	I0109 00:08:43.888269     124 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0109 00:08:43.888269     124 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0109 00:08:43.888269     124 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:08:43.888269     124 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:08:43.888269     124 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0109 00:08:43.888269     124 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0109 00:08:43.888269     124 command_runner.go:130] > This node has joined the cluster:
	I0109 00:08:43.888458     124 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0109 00:08:43.888458     124 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0109 00:08:43.888458     124 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0109 00:08:43.888512     124 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fs9foq.c8qlfxt1cz1jxeza --discovery-token-ca-cert-hash sha256:6a12e94bf3397e7db59fa944f4e20c2c2c34b5794397b381e3c5134eb1900391 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-173500-m02": (3.0519759s)
	I0109 00:08:43.888512     124 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0109 00:08:44.082481     124 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0109 00:08:44.279067     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=multinode-173500 minikube.k8s.io/updated_at=2024_01_09T00_08_44_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:08:44.425447     124 command_runner.go:130] > node/multinode-173500-m02 labeled
	I0109 00:08:44.426269     124 start.go:306] JoinCluster complete in 8.510156s
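
The join above is a two-step handshake: the control plane prints a ready-made join command via "kubeadm token create --print-join-command --ttl=0", and the worker runs that command with the extra CRI-socket, node-name and preflight flags, after which the kubelet service is enabled and the node is labeled. A schematic Go sketch of that orchestration, not minikube's actual implementation (Runner is a hypothetical stand-in for its SSH runner; the commands are the ones printed in the log):

package sketch

import "strings"

// Runner executes a shell command on a particular host and returns its stdout.
type Runner interface {
	Run(cmd string) (string, error)
}

// JoinWorker asks the control plane for a join command and replays it on the
// worker, then enables the kubelet so the node stays joined across reboots.
func JoinWorker(controlPlane, worker Runner, nodeName string) error {
	// 1. The control plane prints a ready-made "kubeadm join ..." line.
	joinCmd, err := controlPlane.Run(
		`sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0`)
	if err != nil {
		return err
	}
	// 2. The worker runs that line plus the CRI socket and node-name flags.
	full := strings.TrimSpace(joinCmd) +
		" --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=" + nodeName
	if _, err := worker.Run("sudo " + full); err != nil {
		return err
	}
	// 3. Enable and start the kubelet.
	_, err = worker.Run("sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet")
	return err
}
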
	I0109 00:08:44.426351     124 cni.go:84] Creating CNI manager for ""
	I0109 00:08:44.426351     124 cni.go:136] 2 nodes found, recommending kindnet
	I0109 00:08:44.440851     124 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0109 00:08:44.450891     124 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0109 00:08:44.450891     124 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0109 00:08:44.450891     124 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0109 00:08:44.450891     124 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0109 00:08:44.450891     124 command_runner.go:130] > Access: 2024-01-09 00:03:39.631411700 +0000
	I0109 00:08:44.450891     124 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0109 00:08:44.450891     124 command_runner.go:130] > Change: 2024-01-09 00:03:29.422000000 +0000
	I0109 00:08:44.450891     124 command_runner.go:130] >  Birth: -
	I0109 00:08:44.450891     124 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0109 00:08:44.450891     124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0109 00:08:44.500625     124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0109 00:08:44.898338     124 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0109 00:08:44.898338     124 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0109 00:08:44.898338     124 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0109 00:08:44.898338     124 command_runner.go:130] > daemonset.apps/kindnet configured
	I0109 00:08:44.899338     124 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:08:44.900322     124 kapi.go:59] client config for multinode-173500: &rest.Config{Host:"https://172.24.100.178:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e2c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0109 00:08:44.901343     124 round_trippers.go:463] GET https://172.24.100.178:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0109 00:08:44.901343     124 round_trippers.go:469] Request Headers:
	I0109 00:08:44.901343     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:44.901343     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:44.914917     124 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0109 00:08:44.914917     124 round_trippers.go:577] Response Headers:
	I0109 00:08:44.914917     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:44.914917     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:44.914917     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:44.915015     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:44.915038     124 round_trippers.go:580]     Content-Length: 291
	I0109 00:08:44.915038     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:44 GMT
	I0109 00:08:44.915099     124 round_trippers.go:580]     Audit-Id: 200c9686-b27b-4a0b-8f04-b2a7524a2b33
	I0109 00:08:44.915099     124 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"40c365d2-2414-4cb8-9731-fc615f6d2dcd","resourceVersion":"418","creationTimestamp":"2024-01-09T00:05:31Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0109 00:08:44.915160     124 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-173500" context rescaled to 1 replicas
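
The GET against .../deployments/coredns/scale and the "rescaled to 1 replicas" line correspond to reading and, if needed, updating the Deployment's Scale subresource so only a single CoreDNS replica runs. A client-go sketch of the same operation (kubeconfig path and error handling are assumptions; the namespace and deployment name come from the log):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	// Read the Scale subresource of the coredns Deployment.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Rescale to a single replica if it is not already there.
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("coredns scaled to 1 replica")
}
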
	I0109 00:08:44.915160     124 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.24.108.84 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0109 00:08:44.919024     124 out.go:177] * Verifying Kubernetes components...
	I0109 00:08:44.934502     124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:08:44.958250     124 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:08:44.959255     124 kapi.go:59] client config for multinode-173500: &rest.Config{Host:"https://172.24.100.178:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e2c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0109 00:08:44.960204     124 node_ready.go:35] waiting up to 6m0s for node "multinode-173500-m02" to be "Ready" ...
	I0109 00:08:44.960204     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:44.960204     124 round_trippers.go:469] Request Headers:
	I0109 00:08:44.960204     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:44.960204     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:44.964780     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:08:44.965732     124 round_trippers.go:577] Response Headers:
	I0109 00:08:44.965779     124 round_trippers.go:580]     Audit-Id: 53b167d2-fd53-44f9-9560-77364e78531e
	I0109 00:08:44.965779     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:44.965779     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:44.965779     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:44.965823     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:44.965823     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:44 GMT
	I0109 00:08:44.965905     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"575","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0109 00:08:45.463684     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:45.463766     124 round_trippers.go:469] Request Headers:
	I0109 00:08:45.463839     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:45.463839     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:45.468454     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:08:45.468454     124 round_trippers.go:577] Response Headers:
	I0109 00:08:45.468454     124 round_trippers.go:580]     Audit-Id: f912eda5-dee6-4b52-81a9-4544d326f6e1
	I0109 00:08:45.468571     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:45.468571     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:45.468571     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:45.468571     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:45.468571     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:45 GMT
	I0109 00:08:45.468757     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"575","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0109 00:08:45.965820     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:45.965922     124 round_trippers.go:469] Request Headers:
	I0109 00:08:45.965922     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:45.965922     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:45.971563     124 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:08:45.971563     124 round_trippers.go:577] Response Headers:
	I0109 00:08:45.971563     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:45.971563     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:45 GMT
	I0109 00:08:45.971563     124 round_trippers.go:580]     Audit-Id: bb4c68b4-d5ac-49e2-b005-a0d74af3e3fb
	I0109 00:08:45.971563     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:45.971563     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:45.971563     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:45.971563     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"575","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0109 00:08:46.470149     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:46.470149     124 round_trippers.go:469] Request Headers:
	I0109 00:08:46.470149     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:46.470149     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:46.474548     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:08:46.474548     124 round_trippers.go:577] Response Headers:
	I0109 00:08:46.474784     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:46.474784     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:46.474784     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:46.474784     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:46 GMT
	I0109 00:08:46.474784     124 round_trippers.go:580]     Audit-Id: 408149ed-4489-47c6-bc83-2c52be60df0c
	I0109 00:08:46.474784     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:46.475024     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"575","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0109 00:08:46.967686     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:46.967686     124 round_trippers.go:469] Request Headers:
	I0109 00:08:46.967686     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:46.967686     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:46.971657     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:08:46.971657     124 round_trippers.go:577] Response Headers:
	I0109 00:08:46.971657     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:46 GMT
	I0109 00:08:46.971657     124 round_trippers.go:580]     Audit-Id: 543e661b-58b1-424d-826c-e1f102cd705f
	I0109 00:08:46.971657     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:46.972122     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:46.972122     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:46.972122     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:46.972358     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"575","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0109 00:08:46.972603     124 node_ready.go:58] node "multinode-173500-m02" has status "Ready":"False"
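
The repeated GETs above are a readiness poll: the Node object is re-fetched roughly every half second, within the stated 6m0s budget, until its Ready condition turns True. A client-go sketch of an equivalent wait (the node name and kubeconfig path are taken from the log; the polling interval and the rest are assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Re-fetch the node until its Ready condition is True or the budget expires.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-173500-m02", metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
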
	I0109 00:08:47.472751     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:47.472751     124 round_trippers.go:469] Request Headers:
	I0109 00:08:47.472825     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:47.472825     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:47.478421     124 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:08:47.478566     124 round_trippers.go:577] Response Headers:
	I0109 00:08:47.478566     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:47.478566     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:47.478566     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:47.478566     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:47 GMT
	I0109 00:08:47.478566     124 round_trippers.go:580]     Audit-Id: b2958310-d58c-4417-bba4-ba53984cd971
	I0109 00:08:47.478647     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:47.478823     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"575","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0109 00:08:47.962070     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:47.962173     124 round_trippers.go:469] Request Headers:
	I0109 00:08:47.962173     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:47.962239     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:47.965518     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:08:47.965518     124 round_trippers.go:577] Response Headers:
	I0109 00:08:47.965518     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:47.965518     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:47 GMT
	I0109 00:08:47.965518     124 round_trippers.go:580]     Audit-Id: 8e190219-86cb-416d-b661-fe12abcfe1f9
	I0109 00:08:47.965518     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:47.965518     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:47.965880     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:47.966068     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"575","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0109 00:08:48.462055     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:48.462140     124 round_trippers.go:469] Request Headers:
	I0109 00:08:48.462140     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:48.462140     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:48.466449     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:08:48.466449     124 round_trippers.go:577] Response Headers:
	I0109 00:08:48.466449     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:48.466449     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:48.466449     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:48 GMT
	I0109 00:08:48.466449     124 round_trippers.go:580]     Audit-Id: 868b6d68-71e4-4498-ab25-426c35a89ce6
	I0109 00:08:48.466987     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:48.466987     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:48.467184     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"575","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0109 00:08:48.965101     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:48.965101     124 round_trippers.go:469] Request Headers:
	I0109 00:08:48.965295     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:48.965295     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:48.972131     124 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0109 00:08:48.972131     124 round_trippers.go:577] Response Headers:
	I0109 00:08:48.972131     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:48.972131     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:48.972131     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:48.973051     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:48.973073     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:48 GMT
	I0109 00:08:48.973073     124 round_trippers.go:580]     Audit-Id: e1e3cf1c-bc87-4879-86e7-b9c7d1565f52
	I0109 00:08:48.973215     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"575","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0109 00:08:48.974305     124 node_ready.go:58] node "multinode-173500-m02" has status "Ready":"False"
	I0109 00:08:49.471157     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:49.471157     124 round_trippers.go:469] Request Headers:
	I0109 00:08:49.471157     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:49.471157     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:49.475801     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:08:49.475801     124 round_trippers.go:577] Response Headers:
	I0109 00:08:49.475801     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:49 GMT
	I0109 00:08:49.475801     124 round_trippers.go:580]     Audit-Id: a08feb99-b0eb-4d9c-b4fd-418f9260f067
	I0109 00:08:49.475801     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:49.475801     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:49.475801     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:49.475801     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:49.476535     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"575","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0109 00:08:49.963791     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:49.963873     124 round_trippers.go:469] Request Headers:
	I0109 00:08:49.963873     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:49.963873     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:49.969604     124 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:08:49.969789     124 round_trippers.go:577] Response Headers:
	I0109 00:08:49.969837     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:49.969837     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:49 GMT
	I0109 00:08:49.969837     124 round_trippers.go:580]     Audit-Id: 9f81e615-55a5-4fd8-9094-303562e2f94a
	I0109 00:08:49.969837     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:49.969837     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:49.969837     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:49.969976     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"575","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0109 00:08:50.471288     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:50.471390     124 round_trippers.go:469] Request Headers:
	I0109 00:08:50.471390     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:50.471390     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:50.476109     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:08:50.476109     124 round_trippers.go:577] Response Headers:
	I0109 00:08:50.476109     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:50.476109     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:50 GMT
	I0109 00:08:50.476109     124 round_trippers.go:580]     Audit-Id: 2b80a858-f674-4dbf-b0f5-fd4f70b0161f
	I0109 00:08:50.476109     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:50.476109     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:50.476109     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:50.476109     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"575","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0109 00:08:50.963173     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:50.968224     124 round_trippers.go:469] Request Headers:
	I0109 00:08:50.968224     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:50.968224     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:50.971144     124 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:08:50.971144     124 round_trippers.go:577] Response Headers:
	I0109 00:08:50.971144     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:50.971144     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:50.971144     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:50.971973     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:50 GMT
	I0109 00:08:50.971973     124 round_trippers.go:580]     Audit-Id: cc072a50-92c7-43b2-b1ae-504b04c7ff42
	I0109 00:08:50.971973     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:50.972329     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"575","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0109 00:08:51.471537     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:51.471629     124 round_trippers.go:469] Request Headers:
	I0109 00:08:51.471629     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:51.471629     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:51.476124     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:08:51.476514     124 round_trippers.go:577] Response Headers:
	I0109 00:08:51.476514     124 round_trippers.go:580]     Audit-Id: 0878a1bf-e461-4650-abfb-7ac5b7c742ff
	I0109 00:08:51.476514     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:51.476514     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:51.476514     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:51.476514     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:51.476514     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:51 GMT
	I0109 00:08:51.476514     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"575","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0109 00:08:51.477186     124 node_ready.go:58] node "multinode-173500-m02" has status "Ready":"False"
	I0109 00:08:51.961329     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:51.961430     124 round_trippers.go:469] Request Headers:
	I0109 00:08:51.961430     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:51.961495     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:51.964700     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:08:51.964700     124 round_trippers.go:577] Response Headers:
	I0109 00:08:51.964700     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:51.964700     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:51.964700     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:51.964700     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:51 GMT
	I0109 00:08:51.964700     124 round_trippers.go:580]     Audit-Id: 7d1f32b8-67ee-4a5d-a6f4-f34a214bcf02
	I0109 00:08:51.964700     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:51.964700     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"575","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0109 00:08:52.469743     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:52.469817     124 round_trippers.go:469] Request Headers:
	I0109 00:08:52.469817     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:52.469817     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:52.473287     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:08:52.473287     124 round_trippers.go:577] Response Headers:
	I0109 00:08:52.473915     124 round_trippers.go:580]     Audit-Id: 65b2cc96-9a66-4dd4-a5b9-2476c5cfe4db
	I0109 00:08:52.473915     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:52.473915     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:52.473915     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:52.473915     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:52.473915     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:52 GMT
	I0109 00:08:52.474142     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"575","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0109 00:08:52.968601     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:52.968601     124 round_trippers.go:469] Request Headers:
	I0109 00:08:52.968708     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:52.968708     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:52.972649     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:08:52.973533     124 round_trippers.go:577] Response Headers:
	I0109 00:08:52.973533     124 round_trippers.go:580]     Audit-Id: afec9a16-d209-40a9-b4d4-c192e263e3ce
	I0109 00:08:52.973533     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:52.973533     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:52.973533     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:52.973533     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:52.973533     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:52 GMT
	I0109 00:08:52.974283     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"575","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0109 00:08:53.471499     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:53.471499     124 round_trippers.go:469] Request Headers:
	I0109 00:08:53.471499     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:53.471499     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:53.475130     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:08:53.475769     124 round_trippers.go:577] Response Headers:
	I0109 00:08:53.475769     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:53.475769     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:53.475769     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:53.475769     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:53 GMT
	I0109 00:08:53.475769     124 round_trippers.go:580]     Audit-Id: b8df264f-a5c2-484c-b2b4-ae56d7726dbc
	I0109 00:08:53.475769     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:53.476999     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"575","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I0109 00:08:53.477834     124 node_ready.go:58] node "multinode-173500-m02" has status "Ready":"False"
	I0109 00:08:53.965788     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:53.966013     124 round_trippers.go:469] Request Headers:
	I0109 00:08:53.966013     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:53.966080     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:53.992015     124 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0109 00:08:53.992626     124 round_trippers.go:577] Response Headers:
	I0109 00:08:53.992797     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:53.992797     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:53.992935     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:53.993022     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:53 GMT
	I0109 00:08:53.993022     124 round_trippers.go:580]     Audit-Id: 276e30e7-2c85-494d-869b-2773be25acdc
	I0109 00:08:53.993022     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:53.993340     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"590","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0109 00:08:54.472996     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:54.473083     124 round_trippers.go:469] Request Headers:
	I0109 00:08:54.473083     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:54.473083     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:54.477383     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:08:54.478025     124 round_trippers.go:577] Response Headers:
	I0109 00:08:54.478025     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:54.478025     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:54 GMT
	I0109 00:08:54.478025     124 round_trippers.go:580]     Audit-Id: da7347da-9481-4451-a200-e10d0df3bf4e
	I0109 00:08:54.478025     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:54.478025     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:54.478122     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:54.478251     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"590","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0109 00:08:54.961880     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:54.962041     124 round_trippers.go:469] Request Headers:
	I0109 00:08:54.962041     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:54.962041     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:54.965373     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:08:54.965373     124 round_trippers.go:577] Response Headers:
	I0109 00:08:54.966484     124 round_trippers.go:580]     Audit-Id: 989b43f4-dafd-4c49-8ee0-25230be5db76
	I0109 00:08:54.966527     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:54.966527     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:54.966527     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:54.966586     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:54.966586     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:54 GMT
	I0109 00:08:54.966722     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"590","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0109 00:08:55.470829     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:55.470829     124 round_trippers.go:469] Request Headers:
	I0109 00:08:55.470829     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:55.470829     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:55.475768     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:08:55.476763     124 round_trippers.go:577] Response Headers:
	I0109 00:08:55.476763     124 round_trippers.go:580]     Audit-Id: 989ef9d2-38f4-46ac-ae29-e691bc3bd5f1
	I0109 00:08:55.476763     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:55.476763     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:55.476763     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:55.476763     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:55.476763     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:55 GMT
	I0109 00:08:55.476763     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"590","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0109 00:08:55.976360     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:55.976430     124 round_trippers.go:469] Request Headers:
	I0109 00:08:55.976430     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:55.976430     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:55.984730     124 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0109 00:08:55.984730     124 round_trippers.go:577] Response Headers:
	I0109 00:08:55.984810     124 round_trippers.go:580]     Audit-Id: baabd2c2-1748-4571-85ec-cb2c68203018
	I0109 00:08:55.984810     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:55.984866     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:55.984866     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:55.984866     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:55.984866     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:55 GMT
	I0109 00:08:55.985241     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"590","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0109 00:08:55.985241     124 node_ready.go:58] node "multinode-173500-m02" has status "Ready":"False"
	I0109 00:08:56.467335     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:56.467401     124 round_trippers.go:469] Request Headers:
	I0109 00:08:56.467401     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:56.467401     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:56.471114     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:08:56.471114     124 round_trippers.go:577] Response Headers:
	I0109 00:08:56.471114     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:56.471114     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:56.471114     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:56.471114     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:56.471114     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:56 GMT
	I0109 00:08:56.471960     124 round_trippers.go:580]     Audit-Id: 436e7f16-5e8d-41e0-b38d-1013acdf007e
	I0109 00:08:56.472278     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"590","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0109 00:08:56.968699     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:56.968699     124 round_trippers.go:469] Request Headers:
	I0109 00:08:56.968699     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:56.968699     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:56.975689     124 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0109 00:08:56.976247     124 round_trippers.go:577] Response Headers:
	I0109 00:08:56.976247     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:56.976347     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:56.976347     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:56.976347     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:56 GMT
	I0109 00:08:56.976438     124 round_trippers.go:580]     Audit-Id: 60a08e0e-2c37-4d76-8c1a-b554ea4c90af
	I0109 00:08:56.976438     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:56.976643     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"590","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0109 00:08:57.461048     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:57.461048     124 round_trippers.go:469] Request Headers:
	I0109 00:08:57.461168     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:57.461168     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:57.465731     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:08:57.465731     124 round_trippers.go:577] Response Headers:
	I0109 00:08:57.465731     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:57.465731     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:57 GMT
	I0109 00:08:57.465731     124 round_trippers.go:580]     Audit-Id: 75d1fe26-552d-455a-99a4-eac089aad6c6
	I0109 00:08:57.465731     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:57.465731     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:57.465731     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:57.465731     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"590","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0109 00:08:57.967409     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:57.967409     124 round_trippers.go:469] Request Headers:
	I0109 00:08:57.967409     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:57.967534     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:57.974515     124 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0109 00:08:57.974515     124 round_trippers.go:577] Response Headers:
	I0109 00:08:57.974515     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:57.974515     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:57.974515     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:57.974515     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:57.974515     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:57 GMT
	I0109 00:08:57.974515     124 round_trippers.go:580]     Audit-Id: 64f6374e-5bdc-4409-88aa-86604ac3da6c
	I0109 00:08:57.975039     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"590","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0109 00:08:58.472116     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:58.472147     124 round_trippers.go:469] Request Headers:
	I0109 00:08:58.472147     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:58.472147     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:58.477351     124 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:08:58.477351     124 round_trippers.go:577] Response Headers:
	I0109 00:08:58.478238     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:58.478238     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:58.478238     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:58 GMT
	I0109 00:08:58.478238     124 round_trippers.go:580]     Audit-Id: d19f4194-bba7-4517-8916-1627c0dc89bc
	I0109 00:08:58.478238     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:58.478238     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:58.478466     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"590","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0109 00:08:58.478466     124 node_ready.go:58] node "multinode-173500-m02" has status "Ready":"False"
	I0109 00:08:58.974970     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:58.975171     124 round_trippers.go:469] Request Headers:
	I0109 00:08:58.975171     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:58.975171     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:58.978591     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:08:58.979234     124 round_trippers.go:577] Response Headers:
	I0109 00:08:58.979234     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:58.979234     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:58 GMT
	I0109 00:08:58.979234     124 round_trippers.go:580]     Audit-Id: 637bea5f-f939-4ba5-9316-34e3a685c6fd
	I0109 00:08:58.979234     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:58.979234     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:58.979234     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:58.979573     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"590","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0109 00:08:59.461473     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:59.461548     124 round_trippers.go:469] Request Headers:
	I0109 00:08:59.461548     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:59.461548     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:59.464936     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:08:59.465703     124 round_trippers.go:577] Response Headers:
	I0109 00:08:59.465703     124 round_trippers.go:580]     Audit-Id: f592f18b-961b-4c7c-8f9d-a78afad19d42
	I0109 00:08:59.465703     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:59.465703     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:59.465703     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:59.465703     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:59.465703     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:59 GMT
	I0109 00:08:59.465792     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"590","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0109 00:08:59.965712     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:08:59.965712     124 round_trippers.go:469] Request Headers:
	I0109 00:08:59.965712     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:08:59.965712     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:08:59.968700     124 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:08:59.968700     124 round_trippers.go:577] Response Headers:
	I0109 00:08:59.968700     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:08:59.968700     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:08:59.969278     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:08:59.969278     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:08:59.969278     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:08:59 GMT
	I0109 00:08:59.969278     124 round_trippers.go:580]     Audit-Id: e300a725-4616-4d3b-95a4-dc5bb8c2b72a
	I0109 00:08:59.969695     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"590","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0109 00:09:00.462656     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:09:00.462656     124 round_trippers.go:469] Request Headers:
	I0109 00:09:00.462656     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:00.462656     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:00.467274     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:09:00.468085     124 round_trippers.go:577] Response Headers:
	I0109 00:09:00.468319     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:00.468319     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:00 GMT
	I0109 00:09:00.468360     124 round_trippers.go:580]     Audit-Id: 0f8162fe-5288-4b0c-8598-71b7aebd6739
	I0109 00:09:00.468360     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:00.468360     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:00.468360     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:00.468600     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"590","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0109 00:09:00.963911     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:09:00.963982     124 round_trippers.go:469] Request Headers:
	I0109 00:09:00.963982     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:00.963982     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:00.968819     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:09:00.968819     124 round_trippers.go:577] Response Headers:
	I0109 00:09:00.968819     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:00.968819     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:00.969566     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:00 GMT
	I0109 00:09:00.969566     124 round_trippers.go:580]     Audit-Id: e974a733-afa1-4b12-b691-e7f140bcbe95
	I0109 00:09:00.969566     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:00.969566     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:00.969566     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"590","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0109 00:09:00.970280     124 node_ready.go:58] node "multinode-173500-m02" has status "Ready":"False"
	I0109 00:09:01.466816     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:09:01.466918     124 round_trippers.go:469] Request Headers:
	I0109 00:09:01.466918     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:01.466918     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:01.471515     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:09:01.471717     124 round_trippers.go:577] Response Headers:
	I0109 00:09:01.471717     124 round_trippers.go:580]     Audit-Id: aaa2b6fd-7504-4056-8742-7985d4e62ed5
	I0109 00:09:01.471717     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:01.471717     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:01.471717     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:01.471717     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:01.471717     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:01 GMT
	I0109 00:09:01.472513     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"590","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0109 00:09:01.966483     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:09:01.966483     124 round_trippers.go:469] Request Headers:
	I0109 00:09:01.966589     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:01.966589     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:01.970871     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:09:01.970871     124 round_trippers.go:577] Response Headers:
	I0109 00:09:01.970871     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:01.970871     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:01.970871     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:01 GMT
	I0109 00:09:01.970871     124 round_trippers.go:580]     Audit-Id: 0aafd01a-b782-457e-a147-a396296def52
	I0109 00:09:01.970871     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:01.970871     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:01.971707     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"590","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0109 00:09:02.468872     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:09:02.468971     124 round_trippers.go:469] Request Headers:
	I0109 00:09:02.468971     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:02.468971     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:02.479003     124 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0109 00:09:02.479408     124 round_trippers.go:577] Response Headers:
	I0109 00:09:02.479481     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:02.479481     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:02 GMT
	I0109 00:09:02.479481     124 round_trippers.go:580]     Audit-Id: c9ab4cab-dba9-4b96-a7c9-6eb6922175f3
	I0109 00:09:02.479481     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:02.479481     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:02.479481     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:02.479481     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"590","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0109 00:09:02.969554     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:09:02.969697     124 round_trippers.go:469] Request Headers:
	I0109 00:09:02.969697     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:02.969697     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:02.974038     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:09:02.974038     124 round_trippers.go:577] Response Headers:
	I0109 00:09:02.974038     124 round_trippers.go:580]     Audit-Id: 329cbf01-31b4-4df0-96bb-00d5041efd52
	I0109 00:09:02.974259     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:02.974259     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:02.974259     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:02.974259     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:02.974323     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:02 GMT
	I0109 00:09:02.974323     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"590","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0109 00:09:02.975061     124 node_ready.go:58] node "multinode-173500-m02" has status "Ready":"False"
	I0109 00:09:03.466205     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:09:03.466289     124 round_trippers.go:469] Request Headers:
	I0109 00:09:03.466289     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:03.466289     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:03.470681     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:09:03.470681     124 round_trippers.go:577] Response Headers:
	I0109 00:09:03.470681     124 round_trippers.go:580]     Audit-Id: eecd206c-b8af-4743-9ed0-62597fd66abb
	I0109 00:09:03.470681     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:03.470681     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:03.470681     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:03.470770     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:03.470770     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:03 GMT
	I0109 00:09:03.470999     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"590","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0109 00:09:03.963657     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:09:03.963722     124 round_trippers.go:469] Request Headers:
	I0109 00:09:03.963722     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:03.963722     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:03.966212     124 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:09:03.966212     124 round_trippers.go:577] Response Headers:
	I0109 00:09:03.967114     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:03.967114     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:03.967114     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:03.967176     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:03.967176     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:03 GMT
	I0109 00:09:03.967176     124 round_trippers.go:580]     Audit-Id: 9f16f74a-6735-465b-878c-df3464de3838
	I0109 00:09:03.967512     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"590","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I0109 00:09:04.468172     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:09:04.468252     124 round_trippers.go:469] Request Headers:
	I0109 00:09:04.468252     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:04.468252     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:04.470847     124 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:09:04.470847     124 round_trippers.go:577] Response Headers:
	I0109 00:09:04.470847     124 round_trippers.go:580]     Audit-Id: a356c221-a681-4bcd-8d16-7364467b7b01
	I0109 00:09:04.470847     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:04.470847     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:04.470847     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:04.470847     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:04.470847     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:04 GMT
	I0109 00:09:04.470847     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"610","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3256 chars]
	I0109 00:09:04.470847     124 node_ready.go:49] node "multinode-173500-m02" has status "Ready":"True"
	I0109 00:09:04.470847     124 node_ready.go:38] duration metric: took 19.5106419s waiting for node "multinode-173500-m02" to be "Ready" ...
	I0109 00:09:04.470847     124 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:09:04.470847     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods
	I0109 00:09:04.470847     124 round_trippers.go:469] Request Headers:
	I0109 00:09:04.470847     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:04.470847     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:04.479812     124 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0109 00:09:04.479812     124 round_trippers.go:577] Response Headers:
	I0109 00:09:04.479812     124 round_trippers.go:580]     Audit-Id: ffa8c6f5-0f3c-4a6b-a008-b1826d48d8ce
	I0109 00:09:04.479812     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:04.479812     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:04.479812     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:04.479812     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:04.480476     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:04 GMT
	I0109 00:09:04.482392     124 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"610"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"413","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67514 chars]
	I0109 00:09:04.485948     124 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:04.486073     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bkss9
	I0109 00:09:04.486172     124 round_trippers.go:469] Request Headers:
	I0109 00:09:04.486172     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:04.486172     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:04.489034     124 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:09:04.489034     124 round_trippers.go:577] Response Headers:
	I0109 00:09:04.489034     124 round_trippers.go:580]     Audit-Id: 546cf201-79c9-4697-b339-69a63676d1f9
	I0109 00:09:04.489893     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:04.489893     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:04.489893     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:04.489893     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:04.489893     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:04 GMT
	I0109 00:09:04.490064     124 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"413","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
	I0109 00:09:04.490630     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:09:04.490630     124 round_trippers.go:469] Request Headers:
	I0109 00:09:04.490630     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:04.490630     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:04.493698     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:09:04.493698     124 round_trippers.go:577] Response Headers:
	I0109 00:09:04.493698     124 round_trippers.go:580]     Audit-Id: 71e290f9-21e3-4d10-995c-0626c0fd95e4
	I0109 00:09:04.493698     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:04.493698     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:04.493698     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:04.493698     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:04.493698     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:04 GMT
	I0109 00:09:04.494443     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"420","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0109 00:09:04.494443     124 pod_ready.go:92] pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:04.495004     124 pod_ready.go:81] duration metric: took 8.4657ms waiting for pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:04.495004     124 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:04.495004     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-173500
	I0109 00:09:04.495004     124 round_trippers.go:469] Request Headers:
	I0109 00:09:04.495158     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:04.495158     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:04.497409     124 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:09:04.497409     124 round_trippers.go:577] Response Headers:
	I0109 00:09:04.497409     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:04.497409     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:04 GMT
	I0109 00:09:04.497409     124 round_trippers.go:580]     Audit-Id: 3cee76b7-4b2c-4c45-be68-0784676d1c28
	I0109 00:09:04.497409     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:04.497409     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:04.497409     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:04.497807     124 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-173500","namespace":"kube-system","uid":"bbcb3d33-7daf-43d9-b596-66cbce3552ad","resourceVersion":"371","creationTimestamp":"2024-01-09T00:05:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.24.100.178:2379","kubernetes.io/config.hash":"8b9b6f8e7be121dc69cce9e8aca59417","kubernetes.io/config.mirror":"8b9b6f8e7be121dc69cce9e8aca59417","kubernetes.io/config.seen":"2024-01-09T00:05:31.606498270Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
	I0109 00:09:04.498363     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:09:04.498363     124 round_trippers.go:469] Request Headers:
	I0109 00:09:04.498363     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:04.498363     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:04.504174     124 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:09:04.504174     124 round_trippers.go:577] Response Headers:
	I0109 00:09:04.504174     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:04 GMT
	I0109 00:09:04.504174     124 round_trippers.go:580]     Audit-Id: 5417fb39-0047-426a-847e-ee42463f31f9
	I0109 00:09:04.504174     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:04.504174     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:04.504174     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:04.504297     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:04.504969     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"420","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0109 00:09:04.505188     124 pod_ready.go:92] pod "etcd-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:04.505188     124 pod_ready.go:81] duration metric: took 10.1844ms waiting for pod "etcd-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:04.505188     124 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:04.505188     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-173500
	I0109 00:09:04.505188     124 round_trippers.go:469] Request Headers:
	I0109 00:09:04.505188     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:04.505188     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:04.508169     124 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:09:04.508169     124 round_trippers.go:577] Response Headers:
	I0109 00:09:04.508710     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:04.508710     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:04.508710     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:04.508710     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:04 GMT
	I0109 00:09:04.508710     124 round_trippers.go:580]     Audit-Id: 3e2bf0e1-2091-4ac7-9334-e4b1bbb37b7e
	I0109 00:09:04.508710     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:04.509083     124 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-173500","namespace":"kube-system","uid":"6ec45d85-b2d5-483f-afdd-ee98dbb0edd1","resourceVersion":"372","creationTimestamp":"2024-01-09T00:05:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.24.100.178:8443","kubernetes.io/config.hash":"6d4780fbf78826137e2d0549410b3c52","kubernetes.io/config.mirror":"6d4780fbf78826137e2d0549410b3c52","kubernetes.io/config.seen":"2024-01-09T00:05:31.606503570Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
	I0109 00:09:04.509083     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:09:04.509631     124 round_trippers.go:469] Request Headers:
	I0109 00:09:04.509693     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:04.509693     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:04.512345     124 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:09:04.512345     124 round_trippers.go:577] Response Headers:
	I0109 00:09:04.512826     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:04.512826     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:04.512826     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:04.512826     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:04 GMT
	I0109 00:09:04.512826     124 round_trippers.go:580]     Audit-Id: 34b4d1fe-c513-4961-835c-3ebdbc580c29
	I0109 00:09:04.512919     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:04.513051     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"420","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0109 00:09:04.513443     124 pod_ready.go:92] pod "kube-apiserver-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:04.513526     124 pod_ready.go:81] duration metric: took 8.3375ms waiting for pod "kube-apiserver-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:04.513526     124 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:04.513674     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-173500
	I0109 00:09:04.513674     124 round_trippers.go:469] Request Headers:
	I0109 00:09:04.513674     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:04.513674     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:04.516298     124 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:09:04.516298     124 round_trippers.go:577] Response Headers:
	I0109 00:09:04.516377     124 round_trippers.go:580]     Audit-Id: 7b6a681b-2dd4-4216-afca-fe305c2f8e40
	I0109 00:09:04.516377     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:04.516377     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:04.516377     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:04.516377     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:04.516377     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:04 GMT
	I0109 00:09:04.516683     124 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-173500","namespace":"kube-system","uid":"a0252ea5-5d6a-4303-b7e6-151481d4cd8a","resourceVersion":"373","creationTimestamp":"2024-01-09T00:05:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f6b180d5a2686dc98b0355b6df7f53ea","kubernetes.io/config.mirror":"f6b180d5a2686dc98b0355b6df7f53ea","kubernetes.io/config.seen":"2024-01-09T00:05:31.606504770Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
	I0109 00:09:04.517299     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:09:04.517299     124 round_trippers.go:469] Request Headers:
	I0109 00:09:04.517299     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:04.517299     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:04.520087     124 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:09:04.520087     124 round_trippers.go:577] Response Headers:
	I0109 00:09:04.520087     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:04.520344     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:04 GMT
	I0109 00:09:04.520344     124 round_trippers.go:580]     Audit-Id: 3dfdff3d-2c94-4461-88c7-7adbb058974c
	I0109 00:09:04.520344     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:04.520344     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:04.520344     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:04.520664     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"420","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0109 00:09:04.521082     124 pod_ready.go:92] pod "kube-controller-manager-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:04.521162     124 pod_ready.go:81] duration metric: took 7.4752ms waiting for pod "kube-controller-manager-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:04.521162     124 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4h4sv" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:04.669556     124 request.go:629] Waited for 148.2881ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4h4sv
	I0109 00:09:04.669556     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4h4sv
	I0109 00:09:04.669556     124 round_trippers.go:469] Request Headers:
	I0109 00:09:04.669556     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:04.669556     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:04.673321     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:09:04.673321     124 round_trippers.go:577] Response Headers:
	I0109 00:09:04.673321     124 round_trippers.go:580]     Audit-Id: c3a320a6-10e1-48a2-9d45-12f74d6f1af6
	I0109 00:09:04.673321     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:04.673321     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:04.673321     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:04.674150     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:04.674150     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:04 GMT
	I0109 00:09:04.674409     124 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4h4sv","generateName":"kube-proxy-","namespace":"kube-system","uid":"a45861ba-73e0-452f-a535-f66e154ea1c6","resourceVersion":"592","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec2434b9-012c-4df1-b401-04556fed7700","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec2434b9-012c-4df1-b401-04556fed7700\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0109 00:09:04.872191     124 request.go:629] Waited for 197.424ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:09:04.872300     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:09:04.872300     124 round_trippers.go:469] Request Headers:
	I0109 00:09:04.872564     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:04.872564     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:04.877157     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:09:04.877236     124 round_trippers.go:577] Response Headers:
	I0109 00:09:04.877236     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:04.877236     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:04.877236     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:04.877236     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:04 GMT
	I0109 00:09:04.877236     124 round_trippers.go:580]     Audit-Id: 20634d62-69f4-4873-a9cd-3022d21e5ab9
	I0109 00:09:04.877236     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:04.877455     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"610","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_08_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3256 chars]
	I0109 00:09:04.878315     124 pod_ready.go:92] pod "kube-proxy-4h4sv" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:04.878315     124 pod_ready.go:81] duration metric: took 357.153ms waiting for pod "kube-proxy-4h4sv" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:04.878315     124 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qrtm6" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:05.073788     124 request.go:629] Waited for 195.2434ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qrtm6
	I0109 00:09:05.074080     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qrtm6
	I0109 00:09:05.074080     124 round_trippers.go:469] Request Headers:
	I0109 00:09:05.074080     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:05.074080     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:05.078732     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:09:05.078770     124 round_trippers.go:577] Response Headers:
	I0109 00:09:05.078770     124 round_trippers.go:580]     Audit-Id: 6475649f-4f20-4d42-b440-738d5b114d36
	I0109 00:09:05.078770     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:05.078770     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:05.078770     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:05.078770     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:05.078770     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:05 GMT
	I0109 00:09:05.078770     124 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qrtm6","generateName":"kube-proxy-","namespace":"kube-system","uid":"37d066e0-6ff3-4f22-abc3-6bddfa64736e","resourceVersion":"374","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec2434b9-012c-4df1-b401-04556fed7700","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec2434b9-012c-4df1-b401-04556fed7700\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0109 00:09:05.276557     124 request.go:629] Waited for 196.6634ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:09:05.276652     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:09:05.276652     124 round_trippers.go:469] Request Headers:
	I0109 00:09:05.276652     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:05.276652     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:05.281093     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:09:05.281093     124 round_trippers.go:577] Response Headers:
	I0109 00:09:05.281093     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:05.281093     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:05 GMT
	I0109 00:09:05.281093     124 round_trippers.go:580]     Audit-Id: 764ee964-ac7c-45c3-8c67-04b582493dc8
	I0109 00:09:05.281657     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:05.281657     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:05.281657     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:05.281903     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"420","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0109 00:09:05.282530     124 pod_ready.go:92] pod "kube-proxy-qrtm6" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:05.282619     124 pod_ready.go:81] duration metric: took 404.3044ms waiting for pod "kube-proxy-qrtm6" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:05.282619     124 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:05.478943     124 request.go:629] Waited for 195.9741ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-173500
	I0109 00:09:05.478943     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-173500
	I0109 00:09:05.478943     124 round_trippers.go:469] Request Headers:
	I0109 00:09:05.478943     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:05.478943     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:05.483608     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:09:05.483608     124 round_trippers.go:577] Response Headers:
	I0109 00:09:05.483948     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:05.483948     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:05.483948     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:05.483948     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:05.483948     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:05 GMT
	I0109 00:09:05.483948     124 round_trippers.go:580]     Audit-Id: d70213ff-ae7d-445e-b35c-89d22b243e38
	I0109 00:09:05.484323     124 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-173500","namespace":"kube-system","uid":"31d8cdf6-292f-4b3c-87c5-951fc34d0ea4","resourceVersion":"370","creationTimestamp":"2024-01-09T00:05:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70306498a200a6bbe0aa0b41e240d63b","kubernetes.io/config.mirror":"70306498a200a6bbe0aa0b41e240d63b","kubernetes.io/config.seen":"2024-01-09T00:05:21.481168866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
	I0109 00:09:05.668666     124 request.go:629] Waited for 183.4354ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:09:05.668757     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes/multinode-173500
	I0109 00:09:05.668836     124 round_trippers.go:469] Request Headers:
	I0109 00:09:05.668836     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:05.668836     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:05.672824     124 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:09:05.672824     124 round_trippers.go:577] Response Headers:
	I0109 00:09:05.672824     124 round_trippers.go:580]     Audit-Id: 5f618d23-1ea1-4a91-8c24-400c773c4beb
	I0109 00:09:05.672824     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:05.672824     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:05.672824     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:05.672824     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:05.672934     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:05 GMT
	I0109 00:09:05.673042     124 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"420","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0109 00:09:05.673744     124 pod_ready.go:92] pod "kube-scheduler-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:05.673892     124 pod_ready.go:81] duration metric: took 391.2726ms waiting for pod "kube-scheduler-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:05.673921     124 pod_ready.go:38] duration metric: took 1.2030739s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:09:05.673921     124 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:09:05.688812     124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:09:05.713981     124 system_svc.go:56] duration metric: took 40.0259ms WaitForService to wait for kubelet.
	I0109 00:09:05.714035     124 kubeadm.go:581] duration metric: took 20.7988193s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:09:05.714035     124 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:09:05.872303     124 request.go:629] Waited for 158.1531ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.100.178:8443/api/v1/nodes
	I0109 00:09:05.872944     124 round_trippers.go:463] GET https://172.24.100.178:8443/api/v1/nodes
	I0109 00:09:05.872944     124 round_trippers.go:469] Request Headers:
	I0109 00:09:05.872944     124 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:09:05.873032     124 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:09:05.877252     124 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:09:05.877454     124 round_trippers.go:577] Response Headers:
	I0109 00:09:05.877454     124 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:09:05 GMT
	I0109 00:09:05.877454     124 round_trippers.go:580]     Audit-Id: 8c53460b-ba42-4c51-af92-9bf4889e93f5
	I0109 00:09:05.877454     124 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:09:05.877454     124 round_trippers.go:580]     Content-Type: application/json
	I0109 00:09:05.877454     124 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:09:05.877454     124 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:09:05.878047     124 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"611"},"items":[{"metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"420","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9260 chars]
	I0109 00:09:05.878576     124 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:09:05.878576     124 node_conditions.go:123] node cpu capacity is 2
	I0109 00:09:05.878576     124 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:09:05.878576     124 node_conditions.go:123] node cpu capacity is 2
	I0109 00:09:05.878576     124 node_conditions.go:105] duration metric: took 164.5412ms to run NodePressure ...
	I0109 00:09:05.878576     124 start.go:228] waiting for startup goroutines ...
	I0109 00:09:05.878576     124 start.go:242] writing updated cluster config ...
	I0109 00:09:05.894704     124 ssh_runner.go:195] Run: rm -f paused
	I0109 00:09:06.059074     124 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0109 00:09:06.065210     124 out.go:177] * Done! kubectl is now configured to use "multinode-173500" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-01-09 00:03:32 UTC, ends at Tue 2024-01-09 00:10:23 UTC. --
	Jan 09 00:05:59 multinode-173500 dockerd[1331]: time="2024-01-09T00:05:59.734427071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 09 00:05:59 multinode-173500 dockerd[1331]: time="2024-01-09T00:05:59.744612306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 09 00:05:59 multinode-173500 dockerd[1331]: time="2024-01-09T00:05:59.744746707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 09 00:05:59 multinode-173500 dockerd[1331]: time="2024-01-09T00:05:59.744780607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 09 00:05:59 multinode-173500 dockerd[1331]: time="2024-01-09T00:05:59.744799407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 09 00:06:00 multinode-173500 cri-dockerd[1215]: time="2024-01-09T00:06:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ea6b136c3ff5de888c757497a2d4eba3cc54dc7e0bd660e0f76c60e6969a2290/resolv.conf as [nameserver 172.24.96.1]"
	Jan 09 00:06:00 multinode-173500 cri-dockerd[1215]: time="2024-01-09T00:06:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/95f02a16160efe98834759312f119478583f5698d67a865708c5d3b0545ccfef/resolv.conf as [nameserver 172.24.96.1]"
	Jan 09 00:06:00 multinode-173500 dockerd[1331]: time="2024-01-09T00:06:00.620523684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 09 00:06:00 multinode-173500 dockerd[1331]: time="2024-01-09T00:06:00.620600484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 09 00:06:00 multinode-173500 dockerd[1331]: time="2024-01-09T00:06:00.620827885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 09 00:06:00 multinode-173500 dockerd[1331]: time="2024-01-09T00:06:00.620955685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 09 00:06:00 multinode-173500 dockerd[1331]: time="2024-01-09T00:06:00.669829343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 09 00:06:00 multinode-173500 dockerd[1331]: time="2024-01-09T00:06:00.669976543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 09 00:06:00 multinode-173500 dockerd[1331]: time="2024-01-09T00:06:00.670002343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 09 00:06:00 multinode-173500 dockerd[1331]: time="2024-01-09T00:06:00.670122644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 09 00:09:32 multinode-173500 dockerd[1331]: time="2024-01-09T00:09:32.095728136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 09 00:09:32 multinode-173500 dockerd[1331]: time="2024-01-09T00:09:32.095880636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 09 00:09:32 multinode-173500 dockerd[1331]: time="2024-01-09T00:09:32.095903736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 09 00:09:32 multinode-173500 dockerd[1331]: time="2024-01-09T00:09:32.095920836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 09 00:09:32 multinode-173500 cri-dockerd[1215]: time="2024-01-09T00:09:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2f9750b321708f33ccba2cfbf5cc8ff1555b240b923ec80238f2950bc69c1f36/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jan 09 00:09:33 multinode-173500 cri-dockerd[1215]: time="2024-01-09T00:09:33Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jan 09 00:09:34 multinode-173500 dockerd[1331]: time="2024-01-09T00:09:34.013907181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 09 00:09:34 multinode-173500 dockerd[1331]: time="2024-01-09T00:09:34.013978981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 09 00:09:34 multinode-173500 dockerd[1331]: time="2024-01-09T00:09:34.015341281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 09 00:09:34 multinode-173500 dockerd[1331]: time="2024-01-09T00:09:34.015516681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d90035f998d24       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   50 seconds ago      Running             busybox                   0                   2f9750b321708       busybox-5bc68d56bd-cfnc7
	cc24fe03754e0       ead0a4a53df89                                                                                         4 minutes ago       Running             coredns                   0                   ea6b136c3ff5d       coredns-5dd5756b68-bkss9
	87cfa509bf083       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   95f02a16160ef       storage-provisioner
	73ce70f8eca1e       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              4 minutes ago       Running             kindnet-cni               0                   f8bc35a82f652       kindnet-ht547
	9faec0fdff890       83f6cc407eed8                                                                                         4 minutes ago       Running             kube-proxy                0                   4ab23b363c354       kube-proxy-qrtm6
	16fd62cddf8b2       73deb9a3f7025                                                                                         5 minutes ago       Running             etcd                      0                   f45ca2656d297       etcd-multinode-173500
	c6bc1bb3e368d       e3db313c6dbc0                                                                                         5 minutes ago       Running             kube-scheduler            0                   414e36a1f442f       kube-scheduler-multinode-173500
	aa0ba9733b8d8       d058aa5ab969c                                                                                         5 minutes ago       Running             kube-controller-manager   0                   1b9f9a6d5d523       kube-controller-manager-multinode-173500
	e4e40eb718ff1       7fe0e6f37db33                                                                                         5 minutes ago       Running             kube-apiserver            0                   ae920e11c3440       kube-apiserver-multinode-173500
	
	
	==> coredns [cc24fe03754e] <==
	[INFO] 10.244.0.3:42740 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002139s
	[INFO] 10.244.1.2:59998 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002018s
	[INFO] 10.244.1.2:59097 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000497s
	[INFO] 10.244.1.2:33857 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000706s
	[INFO] 10.244.1.2:51802 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000618s
	[INFO] 10.244.1.2:57262 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000549s
	[INFO] 10.244.1.2:52763 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001599s
	[INFO] 10.244.1.2:60132 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068s
	[INFO] 10.244.1.2:52590 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000511s
	[INFO] 10.244.0.3:37184 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002542s
	[INFO] 10.244.0.3:36933 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000836s
	[INFO] 10.244.0.3:46781 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000592s
	[INFO] 10.244.0.3:43261 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001146s
	[INFO] 10.244.1.2:36348 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001666s
	[INFO] 10.244.1.2:44924 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091s
	[INFO] 10.244.1.2:37397 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104s
	[INFO] 10.244.1.2:47064 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000527s
	[INFO] 10.244.0.3:58487 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000938s
	[INFO] 10.244.0.3:38603 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001792s
	[INFO] 10.244.0.3:45614 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001165s
	[INFO] 10.244.0.3:36160 - 5 "PTR IN 1.96.24.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0002146s
	[INFO] 10.244.1.2:39722 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001182s
	[INFO] 10.244.1.2:60559 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001412s
	[INFO] 10.244.1.2:42442 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001273s
	[INFO] 10.244.1.2:38705 - 5 "PTR IN 1.96.24.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0001219s
	
	
	==> describe nodes <==
	Name:               multinode-173500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-173500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=multinode-173500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_09T00_05_33_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jan 2024 00:05:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-173500
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jan 2024 00:10:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jan 2024 00:10:08 +0000   Tue, 09 Jan 2024 00:05:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jan 2024 00:10:08 +0000   Tue, 09 Jan 2024 00:05:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jan 2024 00:10:08 +0000   Tue, 09 Jan 2024 00:05:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jan 2024 00:10:08 +0000   Tue, 09 Jan 2024 00:05:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.24.100.178
	  Hostname:    multinode-173500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 3419582c696c4f2690eecf0afe18d995
	  System UUID:                0ef18d3b-01b0-a246-9e9a-8c597fba2d09
	  Boot ID:                    eaa59322-6749-449b-9220-e83fe95acacf
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-cfnc7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 coredns-5dd5756b68-bkss9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m38s
	  kube-system                 etcd-multinode-173500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m52s
	  kube-system                 kindnet-ht547                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m38s
	  kube-system                 kube-apiserver-multinode-173500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-controller-manager-multinode-173500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-proxy-qrtm6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-scheduler-multinode-173500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m37s  kube-proxy       
	  Normal  Starting                 4m52s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m52s  kubelet          Node multinode-173500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m52s  kubelet          Node multinode-173500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m52s  kubelet          Node multinode-173500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m52s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m39s  node-controller  Node multinode-173500 event: Registered Node multinode-173500 in Controller
	  Normal  NodeReady                4m24s  kubelet          Node multinode-173500 status is now: NodeReady
	
	
	Name:               multinode-173500-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-173500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=multinode-173500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_09T00_08_44_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jan 2024 00:08:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-173500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jan 2024 00:10:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jan 2024 00:09:45 +0000   Tue, 09 Jan 2024 00:08:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jan 2024 00:09:45 +0000   Tue, 09 Jan 2024 00:08:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jan 2024 00:09:45 +0000   Tue, 09 Jan 2024 00:08:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jan 2024 00:09:45 +0000   Tue, 09 Jan 2024 00:09:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.24.108.84
	  Hostname:    multinode-173500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 dd6e5193e26749459eed8832fdd6533b
	  System UUID:                59ca1e55-1c20-9b4a-8413-0653325c9061
	  Boot ID:                    ec8d9485-cffd-4cfb-91cc-bef1276ce5c1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-txtnl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kindnet-t72cs               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      100s
	  kube-system                 kube-proxy-4h4sv            0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 90s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  100s (x5 over 102s)  kubelet          Node multinode-173500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s (x5 over 102s)  kubelet          Node multinode-173500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s (x5 over 102s)  kubelet          Node multinode-173500-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           99s                  node-controller  Node multinode-173500-m02 event: Registered Node multinode-173500-m02 in Controller
	  Normal  NodeReady                79s                  kubelet          Node multinode-173500-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +1.328145] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.052277] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.152935] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +8.059377] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan 9 00:04] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.146798] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[ +31.031958] systemd-fstab-generator[939]: Ignoring "noauto" for root device
	[  +0.600989] systemd-fstab-generator[979]: Ignoring "noauto" for root device
	[  +0.164778] systemd-fstab-generator[990]: Ignoring "noauto" for root device
	[  +0.196657] systemd-fstab-generator[1003]: Ignoring "noauto" for root device
	[  +1.362271] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.390583] systemd-fstab-generator[1160]: Ignoring "noauto" for root device
	[  +0.171653] systemd-fstab-generator[1171]: Ignoring "noauto" for root device
	[  +0.170378] systemd-fstab-generator[1182]: Ignoring "noauto" for root device
	[  +0.177967] systemd-fstab-generator[1193]: Ignoring "noauto" for root device
	[  +0.209774] systemd-fstab-generator[1207]: Ignoring "noauto" for root device
	[Jan 9 00:05] systemd-fstab-generator[1316]: Ignoring "noauto" for root device
	[  +2.762181] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.447141] systemd-fstab-generator[1695]: Ignoring "noauto" for root device
	[  +0.797737] kauditd_printk_skb: 29 callbacks suppressed
	[  +9.610228] systemd-fstab-generator[2683]: Ignoring "noauto" for root device
	[ +26.736956] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [16fd62cddf8b] <==
	{"level":"info","ts":"2024-01-09T00:05:25.351269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1a66b2354aff11e6 elected leader 1a66b2354aff11e6 at term 2"}
	{"level":"info","ts":"2024-01-09T00:05:25.358159Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:05:25.364252Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"1a66b2354aff11e6","local-member-attributes":"{Name:multinode-173500 ClientURLs:[https://172.24.100.178:2379]}","request-path":"/0/members/1a66b2354aff11e6/attributes","cluster-id":"e7775a1fec048288","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-09T00:05:25.365081Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-09T00:05:25.370951Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.24.100.178:2379"}
	{"level":"info","ts":"2024-01-09T00:05:25.375119Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-09T00:05:25.376358Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-09T00:05:25.376737Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-09T00:05:25.434414Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e7775a1fec048288","local-member-id":"1a66b2354aff11e6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:05:25.435025Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:05:25.435577Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:05:25.402501Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-09T00:06:13.17132Z","caller":"traceutil/trace.go:171","msg":"trace[1585924540] transaction","detail":"{read_only:false; response_revision:429; number_of_response:1; }","duration":"153.487014ms","start":"2024-01-09T00:06:13.017811Z","end":"2024-01-09T00:06:13.171298Z","steps":["trace[1585924540] 'process raft request'  (duration: 153.283214ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-09T00:06:15.498467Z","caller":"traceutil/trace.go:171","msg":"trace[973550648] linearizableReadLoop","detail":"{readStateIndex:447; appliedIndex:446; }","duration":"168.235405ms","start":"2024-01-09T00:06:15.330213Z","end":"2024-01-09T00:06:15.498448Z","steps":["trace[973550648] 'read index received'  (duration: 167.961105ms)","trace[973550648] 'applied index is now lower than readState.Index'  (duration: 273.6µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-09T00:06:15.498964Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.599407ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-09T00:06:15.499152Z","caller":"traceutil/trace.go:171","msg":"trace[30833720] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:430; }","duration":"169.033307ms","start":"2024-01-09T00:06:15.330108Z","end":"2024-01-09T00:06:15.499141Z","steps":["trace[30833720] 'agreement among raft nodes before linearized reading'  (duration: 168.469007ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-09T00:06:15.499567Z","caller":"traceutil/trace.go:171","msg":"trace[1350969726] transaction","detail":"{read_only:false; response_revision:430; number_of_response:1; }","duration":"312.678183ms","start":"2024-01-09T00:06:15.186878Z","end":"2024-01-09T00:06:15.499556Z","steps":["trace[1350969726] 'process raft request'  (duration: 311.365681ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-09T00:06:15.500925Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-09T00:06:15.186787Z","time spent":"312.835383ms","remote":"127.0.0.1:54358","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1101,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:429 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-01-09T00:07:10.218215Z","caller":"traceutil/trace.go:171","msg":"trace[1857989248] transaction","detail":"{read_only:false; response_revision:473; number_of_response:1; }","duration":"107.214528ms","start":"2024-01-09T00:07:10.110982Z","end":"2024-01-09T00:07:10.218196Z","steps":["trace[1857989248] 'process raft request'  (duration: 106.97473ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-09T00:07:10.222252Z","caller":"traceutil/trace.go:171","msg":"trace[1267309509] transaction","detail":"{read_only:false; response_revision:474; number_of_response:1; }","duration":"101.047973ms","start":"2024-01-09T00:07:10.121175Z","end":"2024-01-09T00:07:10.222223Z","steps":["trace[1267309509] 'process raft request'  (duration: 100.842274ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-09T00:08:35.926708Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"593.753911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-01-09T00:08:35.926753Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.071734ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-09T00:08:35.926796Z","caller":"traceutil/trace.go:171","msg":"trace[1170955800] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:541; }","duration":"187.116434ms","start":"2024-01-09T00:08:35.739666Z","end":"2024-01-09T00:08:35.926783Z","steps":["trace[1170955800] 'range keys from in-memory index tree'  (duration: 187.036934ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-09T00:08:35.926798Z","caller":"traceutil/trace.go:171","msg":"trace[696196105] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:541; }","duration":"593.854611ms","start":"2024-01-09T00:08:35.332916Z","end":"2024-01-09T00:08:35.92677Z","steps":["trace[696196105] 'range keys from in-memory index tree'  (duration: 593.663311ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-09T00:08:35.92683Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-09T00:08:35.332853Z","time spent":"593.969812ms","remote":"127.0.0.1:54322","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	
	
	==> kernel <==
	 00:10:24 up 7 min,  0 users,  load average: 0.41, 0.51, 0.26
	Linux multinode-173500 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [73ce70f8eca1] <==
	I0109 00:09:18.075519       1 main.go:250] Node multinode-173500-m02 has CIDR [10.244.1.0/24] 
	I0109 00:09:28.083280       1 main.go:223] Handling node with IPs: map[172.24.100.178:{}]
	I0109 00:09:28.083383       1 main.go:227] handling current node
	I0109 00:09:28.083398       1 main.go:223] Handling node with IPs: map[172.24.108.84:{}]
	I0109 00:09:28.083407       1 main.go:250] Node multinode-173500-m02 has CIDR [10.244.1.0/24] 
	I0109 00:09:38.102821       1 main.go:223] Handling node with IPs: map[172.24.100.178:{}]
	I0109 00:09:38.103200       1 main.go:227] handling current node
	I0109 00:09:38.103556       1 main.go:223] Handling node with IPs: map[172.24.108.84:{}]
	I0109 00:09:38.103930       1 main.go:250] Node multinode-173500-m02 has CIDR [10.244.1.0/24] 
	I0109 00:09:48.119113       1 main.go:223] Handling node with IPs: map[172.24.100.178:{}]
	I0109 00:09:48.119140       1 main.go:227] handling current node
	I0109 00:09:48.119170       1 main.go:223] Handling node with IPs: map[172.24.108.84:{}]
	I0109 00:09:48.119178       1 main.go:250] Node multinode-173500-m02 has CIDR [10.244.1.0/24] 
	I0109 00:09:58.124817       1 main.go:223] Handling node with IPs: map[172.24.100.178:{}]
	I0109 00:09:58.124937       1 main.go:227] handling current node
	I0109 00:09:58.124951       1 main.go:223] Handling node with IPs: map[172.24.108.84:{}]
	I0109 00:09:58.124958       1 main.go:250] Node multinode-173500-m02 has CIDR [10.244.1.0/24] 
	I0109 00:10:08.141521       1 main.go:223] Handling node with IPs: map[172.24.100.178:{}]
	I0109 00:10:08.145143       1 main.go:227] handling current node
	I0109 00:10:08.145164       1 main.go:223] Handling node with IPs: map[172.24.108.84:{}]
	I0109 00:10:08.145175       1 main.go:250] Node multinode-173500-m02 has CIDR [10.244.1.0/24] 
	I0109 00:10:18.151700       1 main.go:223] Handling node with IPs: map[172.24.100.178:{}]
	I0109 00:10:18.151829       1 main.go:227] handling current node
	I0109 00:10:18.151844       1 main.go:223] Handling node with IPs: map[172.24.108.84:{}]
	I0109 00:10:18.151852       1 main.go:250] Node multinode-173500-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [e4e40eb718ff] <==
	I0109 00:05:27.478450       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0109 00:05:27.478825       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0109 00:05:27.483927       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0109 00:05:27.485612       1 controller.go:624] quota admission added evaluator for: namespaces
	I0109 00:05:27.536870       1 cache.go:39] Caches are synced for autoregister controller
	I0109 00:05:27.554619       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0109 00:05:27.568616       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0109 00:05:27.575190       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0109 00:05:27.575449       1 shared_informer.go:318] Caches are synced for configmaps
	I0109 00:05:27.577314       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0109 00:05:28.393494       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0109 00:05:28.402672       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0109 00:05:28.402778       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0109 00:05:29.630687       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0109 00:05:29.730458       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0109 00:05:29.930424       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0109 00:05:29.953918       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.24.100.178]
	I0109 00:05:29.955199       1 controller.go:624] quota admission added evaluator for: endpoints
	I0109 00:05:29.964613       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0109 00:05:30.493162       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0109 00:05:31.384699       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0109 00:05:31.413652       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0109 00:05:31.434333       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0109 00:05:44.866587       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0109 00:05:45.017455       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [aa0ba9733b8d] <==
	I0109 00:05:45.381676       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.701µs"
	I0109 00:05:59.106913       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="143.1µs"
	I0109 00:05:59.162412       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.8µs"
	I0109 00:05:59.263727       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0109 00:06:01.445995       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.6µs"
	I0109 00:06:01.512149       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="26.921681ms"
	I0109 00:06:01.512449       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.3µs"
	I0109 00:08:43.424860       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-173500-m02\" does not exist"
	I0109 00:08:43.450521       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-173500-m02" podCIDRs=["10.244.1.0/24"]
	I0109 00:08:43.459716       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-t72cs"
	I0109 00:08:43.473685       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4h4sv"
	I0109 00:08:44.298961       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-173500-m02"
	I0109 00:08:44.299517       1 event.go:307] "Event occurred" object="multinode-173500-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-173500-m02 event: Registered Node multinode-173500-m02 in Controller"
	I0109 00:09:04.342029       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-173500-m02"
	I0109 00:09:31.421371       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0109 00:09:31.456608       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-txtnl"
	I0109 00:09:31.479283       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-cfnc7"
	I0109 00:09:31.501298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="79.922723ms"
	I0109 00:09:31.532933       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="29.804809ms"
	I0109 00:09:31.535231       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="72.6µs"
	I0109 00:09:31.556951       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="65.4µs"
	I0109 00:09:34.653596       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="12.070002ms"
	I0109 00:09:34.654245       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="94.5µs"
	I0109 00:09:34.798017       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="16.779203ms"
	I0109 00:09:34.798369       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="273µs"
	
	
	==> kube-proxy [9faec0fdff89] <==
	I0109 00:05:46.392694       1 server_others.go:69] "Using iptables proxy"
	I0109 00:05:46.408193       1 node.go:141] Successfully retrieved node IP: 172.24.100.178
	I0109 00:05:46.459651       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0109 00:05:46.459700       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0109 00:05:46.463149       1 server_others.go:152] "Using iptables Proxier"
	I0109 00:05:46.463194       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0109 00:05:46.463690       1 server.go:846] "Version info" version="v1.28.4"
	I0109 00:05:46.463707       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0109 00:05:46.465493       1 config.go:188] "Starting service config controller"
	I0109 00:05:46.465591       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0109 00:05:46.465632       1 config.go:97] "Starting endpoint slice config controller"
	I0109 00:05:46.465657       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0109 00:05:46.469493       1 config.go:315] "Starting node config controller"
	I0109 00:05:46.469531       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0109 00:05:46.566029       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0109 00:05:46.566037       1 shared_informer.go:318] Caches are synced for service config
	I0109 00:05:46.569916       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [c6bc1bb3e368] <==
	W0109 00:05:28.459818       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0109 00:05:28.459869       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0109 00:05:28.649547       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0109 00:05:28.649828       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0109 00:05:28.730526       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0109 00:05:28.730560       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0109 00:05:28.747358       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0109 00:05:28.747423       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0109 00:05:28.777226       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0109 00:05:28.777767       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0109 00:05:28.800761       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0109 00:05:28.800818       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0109 00:05:28.843807       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0109 00:05:28.844417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0109 00:05:28.888984       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0109 00:05:28.889016       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0109 00:05:28.937776       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0109 00:05:28.937898       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0109 00:05:28.955882       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0109 00:05:28.956129       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0109 00:05:29.004492       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0109 00:05:29.004621       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0109 00:05:29.046692       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0109 00:05:29.046989       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0109 00:05:30.083101       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-09 00:03:32 UTC, ends at Tue 2024-01-09 00:10:24 UTC. --
	Jan 09 00:05:59 multinode-173500 kubelet[2708]: I0109 00:05:59.211279    2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/936240bb-4bdd-4681-91a9-cb458c623805-tmp\") pod \"storage-provisioner\" (UID: \"936240bb-4bdd-4681-91a9-cb458c623805\") " pod="kube-system/storage-provisioner"
	Jan 09 00:05:59 multinode-173500 kubelet[2708]: I0109 00:05:59.211465    2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7xdl\" (UniqueName: \"kubernetes.io/projected/936240bb-4bdd-4681-91a9-cb458c623805-kube-api-access-f7xdl\") pod \"storage-provisioner\" (UID: \"936240bb-4bdd-4681-91a9-cb458c623805\") " pod="kube-system/storage-provisioner"
	Jan 09 00:06:00 multinode-173500 kubelet[2708]: I0109 00:06:00.388361    2708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea6b136c3ff5de888c757497a2d4eba3cc54dc7e0bd660e0f76c60e6969a2290"
	Jan 09 00:06:00 multinode-173500 kubelet[2708]: I0109 00:06:00.394820    2708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95f02a16160efe98834759312f119478583f5698d67a865708c5d3b0545ccfef"
	Jan 09 00:06:01 multinode-173500 kubelet[2708]: I0109 00:06:01.448634    2708 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-bkss9" podStartSLOduration=16.448596162 podCreationTimestamp="2024-01-09 00:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-09 00:06:01.44458675 +0000 UTC m=+30.160382330" watchObservedRunningTime="2024-01-09 00:06:01.448596162 +0000 UTC m=+30.164391642"
	Jan 09 00:06:31 multinode-173500 kubelet[2708]: E0109 00:06:31.692333    2708 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 09 00:06:31 multinode-173500 kubelet[2708]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:06:31 multinode-173500 kubelet[2708]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:06:31 multinode-173500 kubelet[2708]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:07:31 multinode-173500 kubelet[2708]: E0109 00:07:31.690908    2708 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 09 00:07:31 multinode-173500 kubelet[2708]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:07:31 multinode-173500 kubelet[2708]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:07:31 multinode-173500 kubelet[2708]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:08:31 multinode-173500 kubelet[2708]: E0109 00:08:31.704925    2708 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 09 00:08:31 multinode-173500 kubelet[2708]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:08:31 multinode-173500 kubelet[2708]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:08:31 multinode-173500 kubelet[2708]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:09:31 multinode-173500 kubelet[2708]: I0109 00:09:31.502785    2708 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=218.502734971 podCreationTimestamp="2024-01-09 00:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-09 00:06:01.525676995 +0000 UTC m=+30.241472475" watchObservedRunningTime="2024-01-09 00:09:31.502734971 +0000 UTC m=+240.218530451"
	Jan 09 00:09:31 multinode-173500 kubelet[2708]: I0109 00:09:31.504891    2708 topology_manager.go:215] "Topology Admit Handler" podUID="e574852f-f9c9-4fde-9457-2f4309bfabf4" podNamespace="default" podName="busybox-5bc68d56bd-cfnc7"
	Jan 09 00:09:31 multinode-173500 kubelet[2708]: I0109 00:09:31.682763    2708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zhg8\" (UniqueName: \"kubernetes.io/projected/e574852f-f9c9-4fde-9457-2f4309bfabf4-kube-api-access-5zhg8\") pod \"busybox-5bc68d56bd-cfnc7\" (UID: \"e574852f-f9c9-4fde-9457-2f4309bfabf4\") " pod="default/busybox-5bc68d56bd-cfnc7"
	Jan 09 00:09:31 multinode-173500 kubelet[2708]: E0109 00:09:31.691763    2708 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 09 00:09:31 multinode-173500 kubelet[2708]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:09:31 multinode-173500 kubelet[2708]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:09:31 multinode-173500 kubelet[2708]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:09:32 multinode-173500 kubelet[2708]: I0109 00:09:32.702511    2708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f9750b321708f33ccba2cfbf5cc8ff1555b240b923ec80238f2950bc69c1f36"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0109 00:10:15.796569   11144 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-173500 -n multinode-173500
E0109 00:10:30.307389   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-173500 -n multinode-173500: (12.4731107s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-173500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (58.14s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (503.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-173500
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-173500
E0109 00:25:30.307064   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0109 00:25:43.610168   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
multinode_test.go:318: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-173500: (1m23.8545563s)
multinode_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-173500 --wait=true -v=8 --alsologtostderr
E0109 00:28:27.426544   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
E0109 00:28:33.539942   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0109 00:30:30.310059   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0109 00:30:43.622230   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
multinode_test.go:323: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-173500 --wait=true -v=8 --alsologtostderr: exit status 1 (6m20.846939s)

                                                
                                                
-- stdout --
	* [multinode-173500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting control plane node multinode-173500 in cluster multinode-173500
	* Restarting existing hyperv VM for "multinode-173500" ...
	* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Starting worker node multinode-173500-m02 in cluster multinode-173500
	* Restarting existing hyperv VM for "multinode-173500-m02" ...
	* Found network options:
	  - NO_PROXY=172.24.109.120
	  - NO_PROXY=172.24.109.120
	* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	  - env NO_PROXY=172.24.109.120
	* Verifying Kubernetes components...
	* Starting worker node multinode-173500-m03 in cluster multinode-173500
	* Restarting existing hyperv VM for "multinode-173500-m03" ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0109 00:26:05.717926   15272 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0109 00:26:05.796557   15272 out.go:296] Setting OutFile to fd 928 ...
	I0109 00:26:05.797412   15272 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:26:05.797412   15272 out.go:309] Setting ErrFile to fd 660...
	I0109 00:26:05.797412   15272 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:26:05.821878   15272 out.go:303] Setting JSON to false
	I0109 00:26:05.824870   15272 start.go:128] hostinfo: {"hostname":"minikube1","uptime":7460,"bootTime":1704752505,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0109 00:26:05.824870   15272 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0109 00:26:05.828936   15272 out.go:177] * [multinode-173500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I0109 00:26:05.832758   15272 notify.go:220] Checking for updates...
	I0109 00:26:05.837297   15272 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:26:05.841744   15272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0109 00:26:05.844776   15272 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0109 00:26:05.847770   15272 out.go:177]   - MINIKUBE_LOCATION=17830
	I0109 00:26:05.850770   15272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:26:05.853709   15272 config.go:182] Loaded profile config "multinode-173500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0109 00:26:05.853709   15272 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:26:11.301939   15272 out.go:177] * Using the hyperv driver based on existing profile
	I0109 00:26:11.305482   15272 start.go:298] selected driver: hyperv
	I0109 00:26:11.305482   15272 start.go:902] validating driver "hyperv" against &{Name:multinode-173500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-173500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.24.100.178 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.24.108.84 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.24.100.87 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false ina
ccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:26:11.305762   15272 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0109 00:26:11.359424   15272 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0109 00:26:11.359944   15272 cni.go:84] Creating CNI manager for ""
	I0109 00:26:11.359944   15272 cni.go:136] 3 nodes found, recommending kindnet
	I0109 00:26:11.359944   15272 start_flags.go:323] config:
	{Name:multinode-173500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-173500 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.24.100.178 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.24.108.84 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.24.100.87 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false is
tio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:26:11.360326   15272 iso.go:125] acquiring lock: {Name:mk6e44fd4f974e035b521383471f58bfbae3f4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:26:11.366202   15272 out.go:177] * Starting control plane node multinode-173500 in cluster multinode-173500
	I0109 00:26:11.368739   15272 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0109 00:26:11.368739   15272 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0109 00:26:11.368739   15272 cache.go:56] Caching tarball of preloaded images
	I0109 00:26:11.369500   15272 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0109 00:26:11.369500   15272 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0109 00:26:11.369500   15272 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\config.json ...
	I0109 00:26:11.372555   15272 start.go:365] acquiring machines lock for multinode-173500: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0109 00:26:11.372555   15272 start.go:369] acquired machines lock for "multinode-173500" in 0s
	I0109 00:26:11.373207   15272 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:26:11.373358   15272 fix.go:54] fixHost starting: 
	I0109 00:26:11.373525   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:26:14.140760   15272 main.go:141] libmachine: [stdout =====>] : Off
	
	I0109 00:26:14.140760   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:14.140856   15272 fix.go:102] recreateIfNeeded on multinode-173500: state=Stopped err=<nil>
	W0109 00:26:14.140856   15272 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:26:14.148515   15272 out.go:177] * Restarting existing hyperv VM for "multinode-173500" ...
	I0109 00:26:14.151421   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-173500
	I0109 00:26:17.292767   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:26:17.293006   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:17.293006   15272 main.go:141] libmachine: Waiting for host to start...
	I0109 00:26:17.293173   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:26:19.597242   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:26:19.597242   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:19.597334   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:26:22.196185   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:26:22.196185   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:23.199462   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:26:25.515082   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:26:25.515386   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:25.515386   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:26:28.168608   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:26:28.169013   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:29.172345   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:26:31.475773   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:26:31.475955   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:31.476014   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:26:34.092026   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:26:34.096270   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:35.111630   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:26:37.334976   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:26:37.334976   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:37.335089   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:26:39.871643   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:26:39.871643   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:40.873116   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:26:43.106065   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:26:43.106065   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:43.106065   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:26:45.685055   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:26:45.685272   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:45.688344   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:26:47.847455   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:26:47.847455   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:47.847584   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:26:50.439066   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:26:50.439066   15272 main.go:141] libmachine: [stderr =====>] : 
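The repeated Get-VM state and ipaddresses[0] calls above are libmachine polling Hyper-V until the restarted VM reports an IPv4 address (the address stays empty from 00:26:22 until 172.24.109.120 appears at 00:26:45). A minimal sketch of that wait loop, using hypothetical helper names and an assumed poll interval and timeout rather than minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// powershell runs a command the same way the log lines above do
// (-NoProfile -NonInteractive) and returns trimmed stdout.
func powershell(cmd string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
	return strings.TrimSpace(string(out)), err
}

// waitForIP polls the VM state and the first adapter's first address until the VM
// is Running and reports an address, or the timeout expires. Sketch only; the
// one-second interval is an assumption.
func waitForIP(vm string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, err := powershell(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
		if err != nil {
			return "", err
		}
		if state == "Running" {
			ip, err := powershell(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
			if err != nil {
				return "", err
			}
			if ip != "" {
				return ip, nil
			}
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
}

func main() {
	ip, err := waitForIP("multinode-173500", 3*time.Minute)
	fmt.Println(ip, err)
}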
	I0109 00:26:50.439066   15272 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\config.json ...
	I0109 00:26:50.443235   15272 machine.go:88] provisioning docker machine ...
	I0109 00:26:50.443393   15272 buildroot.go:166] provisioning hostname "multinode-173500"
	I0109 00:26:50.443568   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:26:52.602210   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:26:52.602269   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:52.602269   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:26:55.183643   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:26:55.183643   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:55.187887   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:26:55.190570   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.109.120 22 <nil> <nil>}
	I0109 00:26:55.190570   15272 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-173500 && echo "multinode-173500" | sudo tee /etc/hostname
	I0109 00:26:55.353683   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-173500
	
	I0109 00:26:55.353683   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:26:57.561376   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:26:57.561605   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:57.561818   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:00.210510   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:00.210510   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:00.216618   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:27:00.217390   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.109.120 22 <nil> <nil>}
	I0109 00:27:00.217390   15272 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-173500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-173500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-173500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:27:00.383176   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:27:00.383176   15272 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0109 00:27:00.383713   15272 buildroot.go:174] setting up certificates
	I0109 00:27:00.383790   15272 provision.go:83] configureAuth start
	I0109 00:27:00.383926   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:02.531185   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:02.531265   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:02.531265   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:05.108789   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:05.108789   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:05.108789   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:07.260927   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:07.261129   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:07.261129   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:09.821413   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:09.821668   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:09.821668   15272 provision.go:138] copyHostCerts
	I0109 00:27:09.821940   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0109 00:27:09.822260   15272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0109 00:27:09.822260   15272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0109 00:27:09.822778   15272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0109 00:27:09.824073   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0109 00:27:09.824073   15272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0109 00:27:09.824073   15272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0109 00:27:09.824073   15272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0109 00:27:09.826298   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0109 00:27:09.826877   15272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0109 00:27:09.826877   15272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0109 00:27:09.827263   15272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0109 00:27:09.828385   15272 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-173500 san=[172.24.109.120 172.24.109.120 localhost 127.0.0.1 minikube multinode-173500]
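provision.go:112 above signs a server certificate with the profile CA, carrying the org and SAN list shown in the log line. A rough sketch of that step with Go's crypto/x509; the file paths, PKCS#1 key format, serial number, and validity period are assumptions, not minikube's exact implementation:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Error handling elided for brevity; paths stand in for the ca.pem / ca-key.pem
	// files named in the provision.go line above.
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes a PKCS#1 RSA CA key

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()), // serial scheme is an assumption
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-173500"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // validity period is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the san=[...] list in the log line above.
		DNSNames:    []string{"localhost", "minikube", "multinode-173500"},
		IPAddresses: []net.IP{net.ParseIP("172.24.109.120"), net.ParseIP("127.0.0.1")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}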
	I0109 00:27:10.251479   15272 provision.go:172] copyRemoteCerts
	I0109 00:27:10.264450   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:27:10.264450   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:12.422068   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:12.422068   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:12.422068   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:14.983322   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:14.983322   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:14.983631   15272 sshutil.go:53] new ssh client: &{IP:172.24.109.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\id_rsa Username:docker}
	I0109 00:27:15.094491   15272 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8300406s)
	I0109 00:27:15.094491   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0109 00:27:15.095120   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0109 00:27:15.137900   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0109 00:27:15.137900   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0109 00:27:15.184708   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0109 00:27:15.185298   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0109 00:27:15.224204   15272 provision.go:86] duration metric: configureAuth took 14.8404119s
	I0109 00:27:15.224204   15272 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:27:15.224759   15272 config.go:182] Loaded profile config "multinode-173500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0109 00:27:15.224974   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:17.382512   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:17.382765   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:17.382765   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:19.979022   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:19.979022   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:19.988045   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:27:19.988757   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.109.120 22 <nil> <nil>}
	I0109 00:27:19.988757   15272 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0109 00:27:20.128576   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0109 00:27:20.128689   15272 buildroot.go:70] root file system type: tmpfs
	I0109 00:27:20.128929   15272 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0109 00:27:20.128929   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:22.266487   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:22.266558   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:22.266558   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:24.834101   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:24.834101   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:24.840227   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:27:24.840922   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.109.120 22 <nil> <nil>}
	I0109 00:27:24.840922   15272 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0109 00:27:25.002186   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0109 00:27:25.002403   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:27.158104   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:27.158300   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:27.158300   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:29.710265   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:29.710265   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:29.716276   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:27:29.717065   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.109.120 22 <nil> <nil>}
	I0109 00:27:29.717065   15272 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0109 00:27:31.113088   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0109 00:27:31.113369   15272 machine.go:91] provisioned docker machine in 40.6699723s
	I0109 00:27:31.113369   15272 start.go:300] post-start starting for "multinode-173500" (driver="hyperv")
	I0109 00:27:31.113369   15272 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:27:31.129608   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:27:31.129608   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:33.280606   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:33.280606   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:33.280715   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:35.823605   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:35.823605   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:35.823605   15272 sshutil.go:53] new ssh client: &{IP:172.24.109.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\id_rsa Username:docker}
	I0109 00:27:35.934133   15272 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8045242s)
	I0109 00:27:35.947961   15272 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:27:35.955651   15272 command_runner.go:130] > NAME=Buildroot
	I0109 00:27:35.955842   15272 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0109 00:27:35.955842   15272 command_runner.go:130] > ID=buildroot
	I0109 00:27:35.955878   15272 command_runner.go:130] > VERSION_ID=2021.02.12
	I0109 00:27:35.955878   15272 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0109 00:27:35.955878   15272 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:27:35.955982   15272 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0109 00:27:35.956515   15272 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0109 00:27:35.957602   15272 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0109 00:27:35.957602   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0109 00:27:35.971825   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:27:35.990487   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0109 00:27:36.030118   15272 start.go:303] post-start completed in 4.9167482s
	I0109 00:27:36.030247   15272 fix.go:56] fixHost completed within 1m24.656818s
	I0109 00:27:36.030247   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:38.193733   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:38.193733   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:38.193733   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:40.759254   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:40.759254   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:40.765310   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:27:40.765984   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.109.120 22 <nil> <nil>}
	I0109 00:27:40.765984   15272 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0109 00:27:40.906315   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704760060.905080714
	
	I0109 00:27:40.906315   15272 fix.go:206] guest clock: 1704760060.905080714
	I0109 00:27:40.906315   15272 fix.go:219] Guest: 2024-01-09 00:27:40.905080714 +0000 UTC Remote: 2024-01-09 00:27:36.0302478 +0000 UTC m=+90.414922801 (delta=4.874832914s)
	I0109 00:27:40.906854   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:43.034084   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:43.034084   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:43.034207   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:45.557377   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:45.557461   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:45.565357   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:27:45.566284   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.109.120 22 <nil> <nil>}
	I0109 00:27:45.566284   15272 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1704760060
	I0109 00:27:45.714317   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jan  9 00:27:40 UTC 2024
	
	I0109 00:27:45.714317   15272 fix.go:226] clock set: Tue Jan  9 00:27:40 UTC 2024
	 (err=<nil>)
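fix.go above reads the guest clock with "date +%s.%N", compares it with the host clock (delta=4.874832914s here), and then runs "sudo date -s @<unixtime>" over SSH. A small sketch of that check; the skew threshold and the choice of which clock to apply are assumptions, not necessarily what minikube's fix.go does:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// guestClockFix parses the guest's `date +%s.%N` output, computes the skew against
// the host clock, and returns the command to run over SSH when the skew is too large.
func guestClockFix(guestOut string, host time.Time, threshold time.Duration) (string, bool, error) {
	fields := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(fields[0], 10, 64)
	if err != nil {
		return "", false, err
	}
	skew := time.Unix(sec, 0).Sub(host)
	if math.Abs(skew.Seconds()) < threshold.Seconds() {
		return "", false, nil
	}
	// The command form matches the log above; applying the host clock here is an
	// assumption about which timestamp should win.
	return fmt.Sprintf("sudo date -s @%d", host.Unix()), true, nil
}

func main() {
	cmd, needsFix, err := guestClockFix("1704760060.905080714", time.Now(), 2*time.Second)
	fmt.Println(cmd, needsFix, err)
}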
	I0109 00:27:45.714317   15272 start.go:83] releasing machines lock for "multinode-173500", held for 1m34.3417528s
	I0109 00:27:45.714317   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:47.833066   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:47.833066   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:47.833385   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:50.342357   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:50.342442   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:50.347800   15272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:27:50.347891   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:50.359859   15272 ssh_runner.go:195] Run: cat /version.json
	I0109 00:27:50.359859   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:52.532476   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:52.532476   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:52.532476   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:52.532649   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:52.532649   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:52.532808   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:55.206958   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:55.207067   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:55.207232   15272 sshutil.go:53] new ssh client: &{IP:172.24.109.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\id_rsa Username:docker}
	I0109 00:27:55.225892   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:55.225892   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:55.225892   15272 sshutil.go:53] new ssh client: &{IP:172.24.109.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\id_rsa Username:docker}
	I0109 00:27:55.308703   15272 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1702660877-17806", "minikube_version": "v1.32.0", "commit": "957da21b08687cca2533dd65b67e68ead277b79e"}
	I0109 00:27:55.308703   15272 ssh_runner.go:235] Completed: cat /version.json: (4.9488429s)
	I0109 00:27:55.322918   15272 ssh_runner.go:195] Run: systemctl --version
	I0109 00:27:55.444170   15272 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0109 00:27:55.444170   15272 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0963694s)
	I0109 00:27:55.444298   15272 command_runner.go:130] > systemd 247 (247)
	I0109 00:27:55.444298   15272 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0109 00:27:55.458482   15272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0109 00:27:55.467342   15272 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0109 00:27:55.468108   15272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:27:55.482594   15272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:27:55.504269   15272 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0109 00:27:55.504269   15272 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:27:55.504269   15272 start.go:475] detecting cgroup driver to use...
	I0109 00:27:55.504572   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:27:55.531907   15272 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0109 00:27:55.545771   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0109 00:27:55.582095   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0109 00:27:55.598763   15272 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0109 00:27:55.613075   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0109 00:27:55.642850   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0109 00:27:55.672642   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0109 00:27:55.703130   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0109 00:27:55.734472   15272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:27:55.765166   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0109 00:27:55.795649   15272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:27:55.811632   15272 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0109 00:27:55.823898   15272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:27:55.853749   15272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:27:56.025872   15272 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0109 00:27:56.057491   15272 start.go:475] detecting cgroup driver to use...
	I0109 00:27:56.073810   15272 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0109 00:27:56.098189   15272 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0109 00:27:56.098310   15272 command_runner.go:130] > [Unit]
	I0109 00:27:56.098310   15272 command_runner.go:130] > Description=Docker Application Container Engine
	I0109 00:27:56.098310   15272 command_runner.go:130] > Documentation=https://docs.docker.com
	I0109 00:27:56.098310   15272 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0109 00:27:56.098310   15272 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0109 00:27:56.098310   15272 command_runner.go:130] > StartLimitBurst=3
	I0109 00:27:56.098446   15272 command_runner.go:130] > StartLimitIntervalSec=60
	I0109 00:27:56.098498   15272 command_runner.go:130] > [Service]
	I0109 00:27:56.098498   15272 command_runner.go:130] > Type=notify
	I0109 00:27:56.098498   15272 command_runner.go:130] > Restart=on-failure
	I0109 00:27:56.098592   15272 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0109 00:27:56.098653   15272 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0109 00:27:56.098653   15272 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0109 00:27:56.098653   15272 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0109 00:27:56.098653   15272 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0109 00:27:56.098801   15272 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0109 00:27:56.098801   15272 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0109 00:27:56.098841   15272 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0109 00:27:56.098841   15272 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0109 00:27:56.098841   15272 command_runner.go:130] > ExecStart=
	I0109 00:27:56.098971   15272 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0109 00:27:56.098971   15272 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0109 00:27:56.099100   15272 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0109 00:27:56.099100   15272 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0109 00:27:56.099100   15272 command_runner.go:130] > LimitNOFILE=infinity
	I0109 00:27:56.099100   15272 command_runner.go:130] > LimitNPROC=infinity
	I0109 00:27:56.099100   15272 command_runner.go:130] > LimitCORE=infinity
	I0109 00:27:56.099232   15272 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0109 00:27:56.099232   15272 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0109 00:27:56.099232   15272 command_runner.go:130] > TasksMax=infinity
	I0109 00:27:56.099232   15272 command_runner.go:130] > TimeoutStartSec=0
	I0109 00:27:56.099232   15272 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0109 00:27:56.099352   15272 command_runner.go:130] > Delegate=yes
	I0109 00:27:56.099352   15272 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0109 00:27:56.099352   15272 command_runner.go:130] > KillMode=process
	I0109 00:27:56.099352   15272 command_runner.go:130] > [Install]
	I0109 00:27:56.099484   15272 command_runner.go:130] > WantedBy=multi-user.target
	I0109 00:27:56.118127   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:27:56.150121   15272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:27:56.196289   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:27:56.229165   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0109 00:27:56.266734   15272 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0109 00:27:56.335927   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0109 00:27:56.358258   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:27:56.385273   15272 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0109 00:27:56.404337   15272 ssh_runner.go:195] Run: which cri-dockerd
	I0109 00:27:56.410401   15272 command_runner.go:130] > /usr/bin/cri-dockerd
	I0109 00:27:56.425464   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0109 00:27:56.443462   15272 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0109 00:27:56.483958   15272 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0109 00:27:56.659193   15272 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0109 00:27:56.810052   15272 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0109 00:27:56.811275   15272 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0109 00:27:56.864361   15272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:27:57.032798   15272 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0109 00:27:58.746706   15272 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.7129039s)
	I0109 00:27:58.761092   15272 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0109 00:27:58.923197   15272 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0109 00:27:59.096043   15272 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0109 00:27:59.259656   15272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:27:59.433097   15272 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0109 00:27:59.471868   15272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:27:59.641390   15272 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0109 00:27:59.744721   15272 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0109 00:27:59.762778   15272 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0109 00:27:59.770587   15272 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0109 00:27:59.770587   15272 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0109 00:27:59.770587   15272 command_runner.go:130] > Device: 16h/22d	Inode: 942         Links: 1
	I0109 00:27:59.770587   15272 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0109 00:27:59.770587   15272 command_runner.go:130] > Access: 2024-01-09 00:27:59.660292301 +0000
	I0109 00:27:59.770587   15272 command_runner.go:130] > Modify: 2024-01-09 00:27:59.660292301 +0000
	I0109 00:27:59.770587   15272 command_runner.go:130] > Change: 2024-01-09 00:27:59.664292301 +0000
	I0109 00:27:59.770587   15272 command_runner.go:130] >  Birth: -
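The 60-second wait for /var/run/cri-dockerd.sock above amounts to a stat-and-retry loop on the socket path. A minimal Go sketch, assuming a local filesystem rather than the SSH runner minikube actually uses:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, or the timeout
// elapses; a local stand-in for the "Will wait 60s for socket path" step.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}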
	I0109 00:27:59.770587   15272 start.go:543] Will wait 60s for crictl version
	I0109 00:27:59.784516   15272 ssh_runner.go:195] Run: which crictl
	I0109 00:27:59.789727   15272 command_runner.go:130] > /usr/bin/crictl
	I0109 00:27:59.807020   15272 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:27:59.875440   15272 command_runner.go:130] > Version:  0.1.0
	I0109 00:27:59.875440   15272 command_runner.go:130] > RuntimeName:  docker
	I0109 00:27:59.876010   15272 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0109 00:27:59.876010   15272 command_runner.go:130] > RuntimeApiVersion:  v1
	I0109 00:27:59.877983   15272 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0109 00:27:59.888680   15272 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0109 00:27:59.924220   15272 command_runner.go:130] > 24.0.7
	I0109 00:27:59.936026   15272 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0109 00:27:59.973526   15272 command_runner.go:130] > 24.0.7
	I0109 00:27:59.977739   15272 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0109 00:27:59.977832   15272 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0109 00:27:59.982044   15272 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0109 00:27:59.982623   15272 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0109 00:27:59.982623   15272 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0109 00:27:59.982623   15272 ip.go:207] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:c4:61:0a Flags:up|broadcast|multicast|running}
	I0109 00:27:59.984509   15272 ip.go:210] interface addr: fe80::3fa5:15f5:46dc:dc8f/64
	I0109 00:27:59.984509   15272 ip.go:210] interface addr: 172.24.96.1/20
	I0109 00:27:59.998587   15272 ssh_runner.go:195] Run: grep 172.24.96.1	host.minikube.internal$ /etc/hosts
	I0109 00:28:00.003697   15272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.24.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
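The shell one-liner above makes the host.minikube.internal entry idempotent: any existing line for the name is dropped and a fresh one is appended. The same idea as a small Go sketch (local file only; sudo and the temp-file copy are omitted):

package sketch

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites an /etc/hosts-style file so that exactly one line
// maps name to ip, mirroring the grep -v / echo pattern in the log above.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}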
	I0109 00:28:00.024488   15272 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0109 00:28:00.035326   15272 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0109 00:28:00.065275   15272 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0109 00:28:00.066331   15272 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0109 00:28:00.066331   15272 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0109 00:28:00.066378   15272 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0109 00:28:00.066378   15272 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0109 00:28:00.066378   15272 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0109 00:28:00.066378   15272 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0109 00:28:00.066378   15272 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0109 00:28:00.066378   15272 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:28:00.066378   15272 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0109 00:28:00.066592   15272 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0109 00:28:00.066628   15272 docker.go:601] Images already preloaded, skipping extraction
	I0109 00:28:00.077733   15272 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0109 00:28:00.104485   15272 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0109 00:28:00.104628   15272 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0109 00:28:00.104687   15272 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0109 00:28:00.104736   15272 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0109 00:28:00.104736   15272 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0109 00:28:00.104736   15272 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0109 00:28:00.104779   15272 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0109 00:28:00.104779   15272 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0109 00:28:00.104779   15272 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:28:00.104779   15272 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0109 00:28:00.104880   15272 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0109 00:28:00.104933   15272 cache_images.go:84] Images are preloaded, skipping loading
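The preload check above boils down to comparing the images reported by `docker images --format {{.Repository}}:{{.Tag}}` against the list baked into the preload tarball. A simplified local sketch in Go (minikube runs the command over SSH; the helper name is illustrative):

package sketch

import (
	"os/exec"
	"strings"
)

// imagesPreloaded reports whether every image in want is already present in
// the output of `docker images --format {{.Repository}}:{{.Tag}}`.
func imagesPreloaded(want []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := make(map[string]bool)
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, img := range want {
		if !have[img] {
			return false, nil
		}
	}
	return true, nil
}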
	I0109 00:28:00.116044   15272 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0109 00:28:00.154237   15272 command_runner.go:130] > cgroupfs
	I0109 00:28:00.154454   15272 cni.go:84] Creating CNI manager for ""
	I0109 00:28:00.154596   15272 cni.go:136] 3 nodes found, recommending kindnet
	I0109 00:28:00.154596   15272 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:28:00.154596   15272 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.24.109.120 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-173500 NodeName:multinode-173500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.24.109.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.24.109.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:28:00.154596   15272 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.24.109.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-173500"
	  kubeletExtraArgs:
	    node-ip: 172.24.109.120
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.24.109.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0109 00:28:00.155201   15272 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-173500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.24.109.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-173500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:28:00.171114   15272 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0109 00:28:00.187746   15272 command_runner.go:130] > kubeadm
	I0109 00:28:00.187746   15272 command_runner.go:130] > kubectl
	I0109 00:28:00.187746   15272 command_runner.go:130] > kubelet
	I0109 00:28:00.187746   15272 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:28:00.202563   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:28:00.217548   15272 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0109 00:28:00.243759   15272 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0109 00:28:00.269429   15272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0109 00:28:00.313316   15272 ssh_runner.go:195] Run: grep 172.24.109.120	control-plane.minikube.internal$ /etc/hosts
	I0109 00:28:00.321091   15272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.24.109.120	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:28:00.339703   15272 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500 for IP: 172.24.109.120
	I0109 00:28:00.340019   15272 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:28:00.340795   15272 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0109 00:28:00.341148   15272 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0109 00:28:00.341989   15272 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\client.key
	I0109 00:28:00.342152   15272 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.key.bbfd95bd
	I0109 00:28:00.342237   15272 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.crt.bbfd95bd with IP's: [172.24.109.120 10.96.0.1 127.0.0.1 10.0.0.1]
	I0109 00:28:00.798419   15272 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.crt.bbfd95bd ...
	I0109 00:28:00.800410   15272 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.crt.bbfd95bd: {Name:mk9251a5692d3b9d1e3ab6651d92285071b27f44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:28:00.802316   15272 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.key.bbfd95bd ...
	I0109 00:28:00.802316   15272 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.key.bbfd95bd: {Name:mk669cd331a0c838d1aad5edde66451e49f2ffcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:28:00.803348   15272 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.crt.bbfd95bd -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.crt
	I0109 00:28:00.814062   15272 certs.go:341] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.key.bbfd95bd -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.key
	I0109 00:28:00.815730   15272 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\proxy-client.key
	I0109 00:28:00.815730   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0109 00:28:00.816292   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0109 00:28:00.816828   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0109 00:28:00.817117   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0109 00:28:00.817222   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0109 00:28:00.817222   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0109 00:28:00.817222   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0109 00:28:00.817757   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0109 00:28:00.817806   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0109 00:28:00.818617   15272 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0109 00:28:00.819004   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0109 00:28:00.819004   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0109 00:28:00.819640   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0109 00:28:00.819640   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0109 00:28:00.820755   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0109 00:28:00.821077   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0109 00:28:00.821191   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:28:00.821191   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0109 00:28:00.822469   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:28:00.864045   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0109 00:28:00.902393   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:28:00.948511   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0109 00:28:00.987660   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:28:01.026065   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0109 00:28:01.067516   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:28:01.111611   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:28:01.150594   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0109 00:28:01.189867   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:28:01.228818   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0109 00:28:01.265944   15272 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:28:01.309461   15272 ssh_runner.go:195] Run: openssl version
	I0109 00:28:01.316336   15272 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0109 00:28:01.330567   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0109 00:28:01.361114   15272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0109 00:28:01.367222   15272 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 23:11 /usr/share/ca-certificates/142882.pem
	I0109 00:28:01.367222   15272 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:11 /usr/share/ca-certificates/142882.pem
	I0109 00:28:01.383942   15272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0109 00:28:01.391524   15272 command_runner.go:130] > 3ec20f2e
	I0109 00:28:01.405125   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:28:01.434762   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:28:01.465937   15272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:28:01.472018   15272 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 22:56 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:28:01.472167   15272 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:56 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:28:01.486134   15272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:28:01.496357   15272 command_runner.go:130] > b5213941
	I0109 00:28:01.511397   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:28:01.542749   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0109 00:28:01.573936   15272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0109 00:28:01.579591   15272 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 23:11 /usr/share/ca-certificates/14288.pem
	I0109 00:28:01.579591   15272 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:11 /usr/share/ca-certificates/14288.pem
	I0109 00:28:01.593099   15272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0109 00:28:01.601244   15272 command_runner.go:130] > 51391683
	I0109 00:28:01.615639   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
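The openssl/ln sequence above installs each CA into the system trust store under its OpenSSL subject hash (e.g. b5213941.0), which is how TLS libraries locate it. A compact Go sketch of the same two steps, run locally rather than over SSH:

package sketch

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert symlinks a PEM certificate into /etc/ssl/certs under its
// OpenSSL subject hash, matching the `openssl x509 -hash` + `ln -fs` steps.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pemPath, link)
}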
	I0109 00:28:01.647696   15272 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:28:01.654760   15272 command_runner.go:130] > ca.crt
	I0109 00:28:01.654760   15272 command_runner.go:130] > ca.key
	I0109 00:28:01.654760   15272 command_runner.go:130] > healthcheck-client.crt
	I0109 00:28:01.654760   15272 command_runner.go:130] > healthcheck-client.key
	I0109 00:28:01.654760   15272 command_runner.go:130] > peer.crt
	I0109 00:28:01.654760   15272 command_runner.go:130] > peer.key
	I0109 00:28:01.654760   15272 command_runner.go:130] > server.crt
	I0109 00:28:01.654760   15272 command_runner.go:130] > server.key
	I0109 00:28:01.668796   15272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0109 00:28:01.677235   15272 command_runner.go:130] > Certificate will not expire
	I0109 00:28:01.690640   15272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0109 00:28:01.698852   15272 command_runner.go:130] > Certificate will not expire
	I0109 00:28:01.712364   15272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0109 00:28:01.720975   15272 command_runner.go:130] > Certificate will not expire
	I0109 00:28:01.735702   15272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0109 00:28:01.744720   15272 command_runner.go:130] > Certificate will not expire
	I0109 00:28:01.757920   15272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0109 00:28:01.764801   15272 command_runner.go:130] > Certificate will not expire
	I0109 00:28:01.779125   15272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0109 00:28:01.786741   15272 command_runner.go:130] > Certificate will not expire
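The `openssl x509 -checkend 86400` probes above ask whether each certificate expires within the next 24 hours. The same check written against Go's standard library, as a sketch (paths would be the ones probed above):

package sketch

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in pemPath expires
// within d, the question `openssl x509 -checkend 86400` answers for 24h.
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block in " + pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}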
	I0109 00:28:01.788239   15272 kubeadm.go:404] StartCluster: {Name:multinode-173500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.28.4 ClusterName:multinode-173500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.24.109.120 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.24.108.84 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.24.100.87 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ing
ress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:d
ocker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:28:01.799303   15272 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0109 00:28:01.841802   15272 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:28:01.861426   15272 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0109 00:28:01.861503   15272 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0109 00:28:01.861503   15272 command_runner.go:130] > /var/lib/minikube/etcd:
	I0109 00:28:01.861503   15272 command_runner.go:130] > member
	I0109 00:28:01.861565   15272 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0109 00:28:01.861647   15272 kubeadm.go:636] restartCluster start
	I0109 00:28:01.874359   15272 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0109 00:28:01.891474   15272 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:28:01.892667   15272 kubeconfig.go:135] verify returned: extract IP: "multinode-173500" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:28:01.892772   15272 kubeconfig.go:146] "multinode-173500" context is missing from C:\Users\jenkins.minikube1\minikube-integration\kubeconfig - will repair!
	I0109 00:28:01.893342   15272 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:28:01.905617   15272 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:28:01.906575   15272 kapi.go:59] client config for multinode-173500: &rest.Config{Host:"https://172.24.109.120:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500/client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500/client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e2c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0109 00:28:01.908130   15272 cert_rotation.go:137] Starting client certificate rotation controller
	I0109 00:28:01.921594   15272 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0109 00:28:01.940453   15272 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0109 00:28:01.940453   15272 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0109 00:28:01.940453   15272 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0109 00:28:01.940453   15272 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0109 00:28:01.940453   15272 command_runner.go:130] >  kind: InitConfiguration
	I0109 00:28:01.940453   15272 command_runner.go:130] >  localAPIEndpoint:
	I0109 00:28:01.941491   15272 command_runner.go:130] > -  advertiseAddress: 172.24.100.178
	I0109 00:28:01.941491   15272 command_runner.go:130] > +  advertiseAddress: 172.24.109.120
	I0109 00:28:01.941491   15272 command_runner.go:130] >    bindPort: 8443
	I0109 00:28:01.941491   15272 command_runner.go:130] >  bootstrapTokens:
	I0109 00:28:01.941491   15272 command_runner.go:130] >    - groups:
	I0109 00:28:01.941491   15272 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0109 00:28:01.941491   15272 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0109 00:28:01.941491   15272 command_runner.go:130] >    name: "multinode-173500"
	I0109 00:28:01.941491   15272 command_runner.go:130] >    kubeletExtraArgs:
	I0109 00:28:01.941491   15272 command_runner.go:130] > -    node-ip: 172.24.100.178
	I0109 00:28:01.941491   15272 command_runner.go:130] > +    node-ip: 172.24.109.120
	I0109 00:28:01.941491   15272 command_runner.go:130] >    taints: []
	I0109 00:28:01.941491   15272 command_runner.go:130] >  ---
	I0109 00:28:01.941491   15272 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0109 00:28:01.941491   15272 command_runner.go:130] >  kind: ClusterConfiguration
	I0109 00:28:01.941491   15272 command_runner.go:130] >  apiServer:
	I0109 00:28:01.941491   15272 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.24.100.178"]
	I0109 00:28:01.941491   15272 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.24.109.120"]
	I0109 00:28:01.941491   15272 command_runner.go:130] >    extraArgs:
	I0109 00:28:01.941491   15272 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0109 00:28:01.941491   15272 command_runner.go:130] >  controllerManager:
	I0109 00:28:01.941491   15272 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.24.100.178
	+  advertiseAddress: 172.24.109.120
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-173500"
	   kubeletExtraArgs:
	-    node-ip: 172.24.100.178
	+    node-ip: 172.24.109.120
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.24.100.178"]
	+  certSANs: ["127.0.0.1", "localhost", "172.24.109.120"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
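The diff above is how minikube decides that a plain restart is not enough: the rendered kubeadm.yaml.new no longer matches what is on disk because the node IP moved from 172.24.100.178 to 172.24.109.120. Reduced to its essence, the decision is a file comparison; a minimal sketch:

package sketch

import (
	"bytes"
	"os"
)

// needsReconfigure reports whether the kubeadm config on disk differs from the
// freshly rendered one; any difference triggers the reconfiguration path below.
func needsReconfigure(currentPath, renderedPath string) (bool, error) {
	current, err := os.ReadFile(currentPath)
	if err != nil {
		return true, nil // nothing on disk yet: (re)configure
	}
	rendered, err := os.ReadFile(renderedPath)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(current, rendered), nil
}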
	I0109 00:28:01.941491   15272 kubeadm.go:1135] stopping kube-system containers ...
	I0109 00:28:01.950457   15272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0109 00:28:01.982991   15272 command_runner.go:130] > cc24fe03754e
	I0109 00:28:01.982991   15272 command_runner.go:130] > 87cfa509bf08
	I0109 00:28:01.982991   15272 command_runner.go:130] > 95f02a16160e
	I0109 00:28:01.982991   15272 command_runner.go:130] > ea6b136c3ff5
	I0109 00:28:01.982991   15272 command_runner.go:130] > 73ce70f8eca1
	I0109 00:28:01.982991   15272 command_runner.go:130] > 9faec0fdff89
	I0109 00:28:01.982991   15272 command_runner.go:130] > f8bc35a82f65
	I0109 00:28:01.982991   15272 command_runner.go:130] > 4ab23b363c35
	I0109 00:28:01.982991   15272 command_runner.go:130] > 16fd62cddf8b
	I0109 00:28:01.982991   15272 command_runner.go:130] > c6bc1bb3e368
	I0109 00:28:01.982991   15272 command_runner.go:130] > aa0ba9733b8d
	I0109 00:28:01.982991   15272 command_runner.go:130] > e4e40eb718ff
	I0109 00:28:01.982991   15272 command_runner.go:130] > 414e36a1f442
	I0109 00:28:01.982991   15272 command_runner.go:130] > 1b9f9a6d5d52
	I0109 00:28:01.982991   15272 command_runner.go:130] > f45ca2656d29
	I0109 00:28:01.982991   15272 command_runner.go:130] > ae920e11c344
	I0109 00:28:01.982991   15272 docker.go:469] Stopping containers: [cc24fe03754e 87cfa509bf08 95f02a16160e ea6b136c3ff5 73ce70f8eca1 9faec0fdff89 f8bc35a82f65 4ab23b363c35 16fd62cddf8b c6bc1bb3e368 aa0ba9733b8d e4e40eb718ff 414e36a1f442 1b9f9a6d5d52 f45ca2656d29 ae920e11c344]
	I0109 00:28:01.994478   15272 ssh_runner.go:195] Run: docker stop cc24fe03754e 87cfa509bf08 95f02a16160e ea6b136c3ff5 73ce70f8eca1 9faec0fdff89 f8bc35a82f65 4ab23b363c35 16fd62cddf8b c6bc1bb3e368 aa0ba9733b8d e4e40eb718ff 414e36a1f442 1b9f9a6d5d52 f45ca2656d29 ae920e11c344
	I0109 00:28:02.021557   15272 command_runner.go:130] > cc24fe03754e
	I0109 00:28:02.021557   15272 command_runner.go:130] > 87cfa509bf08
	I0109 00:28:02.021557   15272 command_runner.go:130] > 95f02a16160e
	I0109 00:28:02.021557   15272 command_runner.go:130] > ea6b136c3ff5
	I0109 00:28:02.021557   15272 command_runner.go:130] > 73ce70f8eca1
	I0109 00:28:02.021557   15272 command_runner.go:130] > 9faec0fdff89
	I0109 00:28:02.021557   15272 command_runner.go:130] > f8bc35a82f65
	I0109 00:28:02.021557   15272 command_runner.go:130] > 4ab23b363c35
	I0109 00:28:02.021557   15272 command_runner.go:130] > 16fd62cddf8b
	I0109 00:28:02.021688   15272 command_runner.go:130] > c6bc1bb3e368
	I0109 00:28:02.021688   15272 command_runner.go:130] > aa0ba9733b8d
	I0109 00:28:02.021688   15272 command_runner.go:130] > e4e40eb718ff
	I0109 00:28:02.021688   15272 command_runner.go:130] > 414e36a1f442
	I0109 00:28:02.021688   15272 command_runner.go:130] > 1b9f9a6d5d52
	I0109 00:28:02.021738   15272 command_runner.go:130] > f45ca2656d29
	I0109 00:28:02.021738   15272 command_runner.go:130] > ae920e11c344
	I0109 00:28:02.035591   15272 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0109 00:28:02.076835   15272 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:28:02.092005   15272 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0109 00:28:02.092271   15272 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0109 00:28:02.092271   15272 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0109 00:28:02.092326   15272 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:28:02.092530   15272 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:28:02.107619   15272 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:28:02.122226   15272 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0109 00:28:02.122226   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:28:02.538013   15272 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0109 00:28:02.538013   15272 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0109 00:28:02.538013   15272 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0109 00:28:02.538185   15272 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0109 00:28:02.538185   15272 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0109 00:28:02.538185   15272 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0109 00:28:02.538185   15272 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0109 00:28:02.538185   15272 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0109 00:28:02.538185   15272 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0109 00:28:02.538185   15272 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0109 00:28:02.538185   15272 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0109 00:28:02.538185   15272 command_runner.go:130] > [certs] Using the existing "sa" key
	I0109 00:28:02.538185   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:28:03.908879   15272 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0109 00:28:03.908963   15272 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0109 00:28:03.909045   15272 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0109 00:28:03.909045   15272 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0109 00:28:03.909045   15272 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0109 00:28:03.909111   15272 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.3709258s)
	I0109 00:28:03.909111   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:28:04.189591   15272 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:28:04.189591   15272 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:28:04.189591   15272 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0109 00:28:04.190535   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:28:04.285548   15272 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0109 00:28:04.285644   15272 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0109 00:28:04.285644   15272 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0109 00:28:04.285644   15272 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0109 00:28:04.285729   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:28:04.370779   15272 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0109 00:28:04.370779   15272 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:28:04.385515   15272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:28:04.888886   15272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:28:05.393956   15272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:28:05.898445   15272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:28:06.391926   15272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:28:06.901453   15272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:28:07.396758   15272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:28:07.434753   15272 command_runner.go:130] > 1838
	I0109 00:28:07.437808   15272 api_server.go:72] duration metric: took 3.0670063s to wait for apiserver process to appear ...
	I0109 00:28:07.437808   15272 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:28:07.437871   15272 api_server.go:253] Checking apiserver healthz at https://172.24.109.120:8443/healthz ...
	I0109 00:28:11.926831   15272 api_server.go:279] https://172.24.109.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:28:11.927431   15272 api_server.go:103] status: https://172.24.109.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:28:11.927541   15272 api_server.go:253] Checking apiserver healthz at https://172.24.109.120:8443/healthz ...
	I0109 00:28:11.998616   15272 api_server.go:279] https://172.24.109.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:28:11.999185   15272 api_server.go:103] status: https://172.24.109.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:28:11.999185   15272 api_server.go:253] Checking apiserver healthz at https://172.24.109.120:8443/healthz ...
	I0109 00:28:12.025074   15272 api_server.go:279] https://172.24.109.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:28:12.025074   15272 api_server.go:103] status: https://172.24.109.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:28:12.445306   15272 api_server.go:253] Checking apiserver healthz at https://172.24.109.120:8443/healthz ...
	I0109 00:28:12.454566   15272 api_server.go:279] https://172.24.109.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:28:12.454566   15272 api_server.go:103] status: https://172.24.109.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:28:12.946830   15272 api_server.go:253] Checking apiserver healthz at https://172.24.109.120:8443/healthz ...
	I0109 00:28:12.955551   15272 api_server.go:279] https://172.24.109.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:28:12.955721   15272 api_server.go:103] status: https://172.24.109.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:28:13.450913   15272 api_server.go:253] Checking apiserver healthz at https://172.24.109.120:8443/healthz ...
	I0109 00:28:13.460096   15272 api_server.go:279] https://172.24.109.120:8443/healthz returned 200:
	ok
	I0109 00:28:13.460550   15272 round_trippers.go:463] GET https://172.24.109.120:8443/version
	I0109 00:28:13.460550   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:13.460550   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:13.460550   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:13.474318   15272 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0109 00:28:13.474392   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:13.474392   15272 round_trippers.go:580]     Audit-Id: 845d0f29-8073-49bb-83e3-7a5c9701a899
	I0109 00:28:13.474392   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:13.474392   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:13.474392   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:13.474486   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:13.474486   15272 round_trippers.go:580]     Content-Length: 264
	I0109 00:28:13.474486   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:13 GMT
	I0109 00:28:13.474559   15272 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0109 00:28:13.474653   15272 api_server.go:141] control plane version: v1.28.4
	I0109 00:28:13.474741   15272 api_server.go:131] duration metric: took 6.036933s to wait for apiserver health ...
	I0109 00:28:13.474741   15272 cni.go:84] Creating CNI manager for ""
	I0109 00:28:13.474741   15272 cni.go:136] 3 nodes found, recommending kindnet
	I0109 00:28:13.477562   15272 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0109 00:28:13.493679   15272 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0109 00:28:13.501626   15272 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0109 00:28:13.501626   15272 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0109 00:28:13.501714   15272 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0109 00:28:13.501714   15272 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0109 00:28:13.501714   15272 command_runner.go:130] > Access: 2024-01-09 00:26:43.947705700 +0000
	I0109 00:28:13.501714   15272 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0109 00:28:13.501714   15272 command_runner.go:130] > Change: 2024-01-09 00:26:31.489000000 +0000
	I0109 00:28:13.501714   15272 command_runner.go:130] >  Birth: -
	I0109 00:28:13.501810   15272 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0109 00:28:13.501867   15272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0109 00:28:13.548345   15272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0109 00:28:16.132925   15272 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0109 00:28:16.133000   15272 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0109 00:28:16.133000   15272 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0109 00:28:16.133000   15272 command_runner.go:130] > daemonset.apps/kindnet configured
	I0109 00:28:16.133000   15272 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.5846543s)
	I0109 00:28:16.133216   15272 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:28:16.133442   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods
	I0109 00:28:16.133442   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.133442   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.133512   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.138831   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:28:16.139858   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.139902   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.139902   15272 round_trippers.go:580]     Audit-Id: 46182e30-0800-4ab0-b236-c403a7e5ddf6
	I0109 00:28:16.139902   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.139902   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.139902   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.139902   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.141190   15272 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1734"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"1670","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84171 chars]
	I0109 00:28:16.147800   15272 system_pods.go:59] 12 kube-system pods found
	I0109 00:28:16.147800   15272 system_pods.go:61] "coredns-5dd5756b68-bkss9" [463fb6c6-1e85-419f-9c13-96e58a2ec22e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0109 00:28:16.147800   15272 system_pods.go:61] "etcd-multinode-173500" [bbcb3d33-7daf-43d9-b596-66cbce3552ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0109 00:28:16.147800   15272 system_pods.go:61] "kindnet-6nz87" [73ad6ec4-cbfb-4b93-888c-3d430f3c7bf2] Running
	I0109 00:28:16.147800   15272 system_pods.go:61] "kindnet-ht547" [711faf1a-9f77-487c-bd84-1e227ab9c51a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0109 00:28:16.147800   15272 system_pods.go:61] "kindnet-t72cs" [63893803-de87-4df9-ac98-3772bd46603c] Running
	I0109 00:28:16.147800   15272 system_pods.go:61] "kube-apiserver-multinode-173500" [6ec45d85-b2d5-483f-afdd-ee98dbb0edd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0109 00:28:16.147800   15272 system_pods.go:61] "kube-controller-manager-multinode-173500" [a0252ea5-5d6a-4303-b7e6-151481d4cd8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0109 00:28:16.147800   15272 system_pods.go:61] "kube-proxy-4h4sv" [a45861ba-73e0-452f-a535-f66e154ea1c6] Running
	I0109 00:28:16.147800   15272 system_pods.go:61] "kube-proxy-mj6ks" [bd23c4c8-d363-4a3f-b750-a3de2346a3bb] Running
	I0109 00:28:16.147800   15272 system_pods.go:61] "kube-proxy-qrtm6" [37d066e0-6ff3-4f22-abc3-6bddfa64736e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0109 00:28:16.147800   15272 system_pods.go:61] "kube-scheduler-multinode-173500" [31d8cdf6-292f-4b3c-87c5-951fc34d0ea4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0109 00:28:16.147800   15272 system_pods.go:61] "storage-provisioner" [936240bb-4bdd-4681-91a9-cb458c623805] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0109 00:28:16.147800   15272 system_pods.go:74] duration metric: took 14.5839ms to wait for pod list to return data ...
	I0109 00:28:16.147800   15272 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:28:16.147800   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes
	I0109 00:28:16.147800   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.147800   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.147800   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.153789   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:28:16.153789   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.153789   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.153789   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.153789   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.153789   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.153789   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.153789   15272 round_trippers.go:580]     Audit-Id: 2293575c-ba4e-439b-ae5d-f108447b3fef
	I0109 00:28:16.153789   15272 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1734"},"items":[{"metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14858 chars]
	I0109 00:28:16.154796   15272 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:28:16.155785   15272 node_conditions.go:123] node cpu capacity is 2
	I0109 00:28:16.155785   15272 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:28:16.155785   15272 node_conditions.go:123] node cpu capacity is 2
	I0109 00:28:16.155785   15272 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:28:16.155785   15272 node_conditions.go:123] node cpu capacity is 2
	I0109 00:28:16.155785   15272 node_conditions.go:105] duration metric: took 7.9855ms to run NodePressure ...
	I0109 00:28:16.155785   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:28:16.645878   15272 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0109 00:28:16.645878   15272 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0109 00:28:16.646033   15272 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0109 00:28:16.646155   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0109 00:28:16.646235   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.646235   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.646235   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.651110   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:16.651110   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.651110   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.651110   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.651110   15272 round_trippers.go:580]     Audit-Id: 1c1375f4-94a5-4965-887a-9fac15f9a697
	I0109 00:28:16.651110   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.651110   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.651110   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.651594   15272 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1739"},"items":[{"metadata":{"name":"etcd-multinode-173500","namespace":"kube-system","uid":"bbcb3d33-7daf-43d9-b596-66cbce3552ad","resourceVersion":"1660","creationTimestamp":"2024-01-09T00:05:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.24.100.178:2379","kubernetes.io/config.hash":"8b9b6f8e7be121dc69cce9e8aca59417","kubernetes.io/config.mirror":"8b9b6f8e7be121dc69cce9e8aca59417","kubernetes.io/config.seen":"2024-01-09T00:05:31.606498270Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 29377 chars]
	I0109 00:28:16.653731   15272 kubeadm.go:787] kubelet initialised
	I0109 00:28:16.653800   15272 kubeadm.go:788] duration metric: took 7.7669ms waiting for restarted kubelet to initialise ...
	I0109 00:28:16.653800   15272 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:28:16.653942   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods
	I0109 00:28:16.653942   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.653942   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.654011   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.658451   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:16.658451   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.658451   15272 round_trippers.go:580]     Audit-Id: 62fe31ad-e159-40fb-ace1-2860b1cbe504
	I0109 00:28:16.658451   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.659466   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.659466   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.659510   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.659510   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.661558   15272 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1739"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"1670","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84171 chars]
	I0109 00:28:16.665707   15272 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:16.665845   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bkss9
	I0109 00:28:16.665949   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.665949   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.665989   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.669365   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:16.669365   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.669365   15272 round_trippers.go:580]     Audit-Id: c0e760a2-bf91-4bfd-9982-72b42bebd44d
	I0109 00:28:16.669365   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.670367   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.670367   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.670367   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.670367   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.670608   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"1670","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0109 00:28:16.671258   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:16.671258   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.671258   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.671332   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.676637   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:28:16.676637   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.676637   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.676637   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.676637   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.676637   15272 round_trippers.go:580]     Audit-Id: b79e19a1-86a2-43f7-b713-ffa7655775c7
	I0109 00:28:16.676637   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.676637   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.676637   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:16.677421   15272 pod_ready.go:97] node "multinode-173500" hosting pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:16.677421   15272 pod_ready.go:81] duration metric: took 11.7141ms waiting for pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace to be "Ready" ...
	E0109 00:28:16.677421   15272 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-173500" hosting pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:16.677421   15272 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:16.677421   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-173500
	I0109 00:28:16.677421   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.677421   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.677421   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.681724   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:16.681724   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.681724   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.681724   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.681724   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.681724   15272 round_trippers.go:580]     Audit-Id: 1c666e58-77b3-49bc-9d0e-f15ae83cc4fe
	I0109 00:28:16.681724   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.681724   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.682105   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-173500","namespace":"kube-system","uid":"bbcb3d33-7daf-43d9-b596-66cbce3552ad","resourceVersion":"1660","creationTimestamp":"2024-01-09T00:05:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.24.100.178:2379","kubernetes.io/config.hash":"8b9b6f8e7be121dc69cce9e8aca59417","kubernetes.io/config.mirror":"8b9b6f8e7be121dc69cce9e8aca59417","kubernetes.io/config.seen":"2024-01-09T00:05:31.606498270Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/conf [truncated 6097 chars]
	I0109 00:28:16.682681   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:16.682738   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.682738   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.682810   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.685784   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:28:16.685907   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.685907   15272 round_trippers.go:580]     Audit-Id: b9c95b47-e43f-4979-8194-764ea91d789c
	I0109 00:28:16.686006   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.686006   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.686006   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.686006   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.686084   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.686161   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:16.686699   15272 pod_ready.go:97] node "multinode-173500" hosting pod "etcd-multinode-173500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:16.686760   15272 pod_ready.go:81] duration metric: took 9.3389ms waiting for pod "etcd-multinode-173500" in "kube-system" namespace to be "Ready" ...
	E0109 00:28:16.686760   15272 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-173500" hosting pod "etcd-multinode-173500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:16.686842   15272 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:16.686915   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-173500
	I0109 00:28:16.686915   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.686915   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.686915   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.689118   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:28:16.689118   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.689118   15272 round_trippers.go:580]     Audit-Id: 838d4199-e440-4dc3-990a-0e99ae3707e6
	I0109 00:28:16.689118   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.689118   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.689118   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.689118   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.689118   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.690158   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-173500","namespace":"kube-system","uid":"6ec45d85-b2d5-483f-afdd-ee98dbb0edd1","resourceVersion":"1664","creationTimestamp":"2024-01-09T00:05:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.24.100.178:8443","kubernetes.io/config.hash":"6d4780fbf78826137e2d0549410b3c52","kubernetes.io/config.mirror":"6d4780fbf78826137e2d0549410b3c52","kubernetes.io/config.seen":"2024-01-09T00:05:31.606503570Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7653 chars]
	I0109 00:28:16.690158   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:16.690158   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.690158   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.690158   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.694120   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:16.694120   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.694406   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.694406   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.694406   15272 round_trippers.go:580]     Audit-Id: fcfa1c6d-ab20-484a-8cbf-ce288cdd93e6
	I0109 00:28:16.694406   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.694406   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.694406   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.694709   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:16.695173   15272 pod_ready.go:97] node "multinode-173500" hosting pod "kube-apiserver-multinode-173500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:16.695265   15272 pod_ready.go:81] duration metric: took 8.4222ms waiting for pod "kube-apiserver-multinode-173500" in "kube-system" namespace to be "Ready" ...
	E0109 00:28:16.695265   15272 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-173500" hosting pod "kube-apiserver-multinode-173500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:16.695265   15272 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:16.695265   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-173500
	I0109 00:28:16.695265   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.695265   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.695265   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.697875   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:28:16.697875   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.697875   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.697875   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.697875   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.697875   15272 round_trippers.go:580]     Audit-Id: 6d4d3f48-ad44-40a2-a989-800ffa185c2a
	I0109 00:28:16.697875   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.697875   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.697875   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-173500","namespace":"kube-system","uid":"a0252ea5-5d6a-4303-b7e6-151481d4cd8a","resourceVersion":"1712","creationTimestamp":"2024-01-09T00:05:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f6b180d5a2686dc98b0355b6df7f53ea","kubernetes.io/config.mirror":"f6b180d5a2686dc98b0355b6df7f53ea","kubernetes.io/config.seen":"2024-01-09T00:05:31.606504770Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7441 chars]
	I0109 00:28:16.698876   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:16.698876   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.698876   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.698876   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.701875   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:28:16.701875   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.701875   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.701875   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.701875   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.702322   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.702322   15272 round_trippers.go:580]     Audit-Id: a25caa60-adf0-456f-871b-4b0c22d4a104
	I0109 00:28:16.702375   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.702735   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:16.702735   15272 pod_ready.go:97] node "multinode-173500" hosting pod "kube-controller-manager-multinode-173500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:16.702735   15272 pod_ready.go:81] duration metric: took 7.4705ms waiting for pod "kube-controller-manager-multinode-173500" in "kube-system" namespace to be "Ready" ...
	E0109 00:28:16.702735   15272 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-173500" hosting pod "kube-controller-manager-multinode-173500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:16.703293   15272 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4h4sv" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:16.858341   15272 request.go:629] Waited for 154.7254ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4h4sv
	I0109 00:28:16.858478   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4h4sv
	I0109 00:28:16.858478   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.858478   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.858478   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.864201   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:28:16.864201   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.864201   15272 round_trippers.go:580]     Audit-Id: e1755f9b-a866-41c0-be63-8fb3151bd3be
	I0109 00:28:16.864201   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.864201   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.864201   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.864201   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.864201   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.864483   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4h4sv","generateName":"kube-proxy-","namespace":"kube-system","uid":"a45861ba-73e0-452f-a535-f66e154ea1c6","resourceVersion":"592","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec2434b9-012c-4df1-b401-04556fed7700","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec2434b9-012c-4df1-b401-04556fed7700\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0109 00:28:17.061026   15272 request.go:629] Waited for 195.998ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:28:17.061187   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:28:17.061187   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:17.061187   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:17.061404   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:17.065801   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:17.065801   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:17.065801   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:17.065801   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:17.065801   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:17.066117   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:17 GMT
	I0109 00:28:17.066117   15272 round_trippers.go:580]     Audit-Id: d369ee4a-f561-4513-b463-93fb9ba94bb5
	I0109 00:28:17.066117   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:17.066311   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"1573","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_23_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager": [truncated 3819 chars]
	I0109 00:28:17.066776   15272 pod_ready.go:92] pod "kube-proxy-4h4sv" in "kube-system" namespace has status "Ready":"True"
	I0109 00:28:17.066841   15272 pod_ready.go:81] duration metric: took 363.5477ms waiting for pod "kube-proxy-4h4sv" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:17.066841   15272 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mj6ks" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:17.248845   15272 request.go:629] Waited for 181.6881ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mj6ks
	I0109 00:28:17.248941   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mj6ks
	I0109 00:28:17.248941   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:17.248941   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:17.249035   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:17.253453   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:17.253453   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:17.253453   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:17.253453   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:17 GMT
	I0109 00:28:17.253453   15272 round_trippers.go:580]     Audit-Id: e2e4bbfb-22ea-429e-ac28-382b573059ba
	I0109 00:28:17.253453   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:17.253453   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:17.253453   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:17.254084   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mj6ks","generateName":"kube-proxy-","namespace":"kube-system","uid":"bd23c4c8-d363-4a3f-b750-a3de2346a3bb","resourceVersion":"1587","creationTimestamp":"2024-01-09T00:13:24Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec2434b9-012c-4df1-b401-04556fed7700","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:13:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec2434b9-012c-4df1-b401-04556fed7700\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0109 00:28:17.453694   15272 request.go:629] Waited for 198.6122ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m03
	I0109 00:28:17.453927   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m03
	I0109 00:28:17.453927   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:17.453927   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:17.454027   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:17.457991   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:17.457991   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:17.457991   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:17.457991   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:17.457991   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:17 GMT
	I0109 00:28:17.457991   15272 round_trippers.go:580]     Audit-Id: 797b0fce-0d07-4493-b613-e1e500c6475d
	I0109 00:28:17.457991   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:17.457991   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:17.459176   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m03","uid":"9d8a783a-d01b-498d-94ae-1e3f65e7667c","resourceVersion":"1603","creationTimestamp":"2024-01-09T00:23:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_23_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:23:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3636 chars]
	I0109 00:28:17.459520   15272 pod_ready.go:92] pod "kube-proxy-mj6ks" in "kube-system" namespace has status "Ready":"True"
	I0109 00:28:17.459640   15272 pod_ready.go:81] duration metric: took 392.7988ms waiting for pod "kube-proxy-mj6ks" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:17.459640   15272 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qrtm6" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:17.658797   15272 request.go:629] Waited for 198.9374ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qrtm6
	I0109 00:28:17.658797   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qrtm6
	I0109 00:28:17.658797   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:17.658797   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:17.658797   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:17.663701   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:17.663701   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:17.663701   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:17.663701   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:17.663701   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:17 GMT
	I0109 00:28:17.663701   15272 round_trippers.go:580]     Audit-Id: a15d7487-5c0a-4f60-9399-3cddb281509c
	I0109 00:28:17.663701   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:17.663701   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:17.663701   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qrtm6","generateName":"kube-proxy-","namespace":"kube-system","uid":"37d066e0-6ff3-4f22-abc3-6bddfa64736e","resourceVersion":"1659","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec2434b9-012c-4df1-b401-04556fed7700","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec2434b9-012c-4df1-b401-04556fed7700\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5933 chars]
	I0109 00:28:17.846599   15272 request.go:629] Waited for 181.974ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:17.846599   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:17.846599   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:17.846599   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:17.846599   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:17.851434   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:17.851434   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:17.851528   15272 round_trippers.go:580]     Audit-Id: 8f0edb2b-65ad-4f85-a91f-7ff1b75dd82b
	I0109 00:28:17.851528   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:17.851528   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:17.851528   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:17.851528   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:17.851528   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:17 GMT
	I0109 00:28:17.851851   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:17.852323   15272 pod_ready.go:97] node "multinode-173500" hosting pod "kube-proxy-qrtm6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:17.852417   15272 pod_ready.go:81] duration metric: took 392.7774ms waiting for pod "kube-proxy-qrtm6" in "kube-system" namespace to be "Ready" ...
	E0109 00:28:17.852417   15272 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-173500" hosting pod "kube-proxy-qrtm6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:17.852417   15272 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:18.049110   15272 request.go:629] Waited for 196.3633ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-173500
	I0109 00:28:18.049300   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-173500
	I0109 00:28:18.049359   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:18.049359   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:18.049359   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:18.053633   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:18.054376   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:18.054376   15272 round_trippers.go:580]     Audit-Id: 7b9aaa9d-2b80-497c-bab3-0e264d561aab
	I0109 00:28:18.054376   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:18.054376   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:18.054376   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:18.054376   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:18.054376   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:18 GMT
	I0109 00:28:18.054544   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-173500","namespace":"kube-system","uid":"31d8cdf6-292f-4b3c-87c5-951fc34d0ea4","resourceVersion":"1663","creationTimestamp":"2024-01-09T00:05:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70306498a200a6bbe0aa0b41e240d63b","kubernetes.io/config.mirror":"70306498a200a6bbe0aa0b41e240d63b","kubernetes.io/config.seen":"2024-01-09T00:05:21.481168866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5153 chars]
	I0109 00:28:18.252644   15272 request.go:629] Waited for 197.7817ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:18.252724   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:18.252724   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:18.252793   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:18.252793   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:18.257335   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:18.257335   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:18.257335   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:18 GMT
	I0109 00:28:18.257335   15272 round_trippers.go:580]     Audit-Id: 51295b7b-4924-446a-8bf5-a99ac6c843e3
	I0109 00:28:18.257335   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:18.257335   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:18.257335   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:18.257335   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:18.257961   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:18.257961   15272 pod_ready.go:97] node "multinode-173500" hosting pod "kube-scheduler-multinode-173500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:18.257961   15272 pod_ready.go:81] duration metric: took 405.5438ms waiting for pod "kube-scheduler-multinode-173500" in "kube-system" namespace to be "Ready" ...
	E0109 00:28:18.257961   15272 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-173500" hosting pod "kube-scheduler-multinode-173500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:18.257961   15272 pod_ready.go:38] duration metric: took 1.6041611s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
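For reference, the pod_ready.go waits above boil down to: fetch the system pod, fetch the node it runs on, and skip the extra wait when that node is not Ready (the "(skipping!)" lines). A minimal client-go sketch of one such check, under the assumption of the pod/node names and kubeconfig path taken from this log; it is illustrative only, not minikube's own pod_ready.go implementation:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's NodeReady condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path, copied from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube1\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// One iteration of the check repeated above for each system-critical pod.
	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-multinode-173500", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	switch {
	case !nodeReady(node):
		fmt.Printf("node %q hosting pod %q is not Ready, skipping\n", node.Name, pod.Name)
	case podReady(pod):
		fmt.Printf("pod %q is Ready\n", pod.Name)
	default:
		fmt.Printf("pod %q is not Ready yet\n", pod.Name)
	}
}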
	I0109 00:28:18.258506   15272 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:28:18.276527   15272 command_runner.go:130] > -16
	I0109 00:28:18.277572   15272 ops.go:34] apiserver oom_adj: -16
	I0109 00:28:18.278136   15272 kubeadm.go:640] restartCluster took 16.4159093s
	I0109 00:28:18.278136   15272 kubeadm.go:406] StartCluster complete in 16.489958s
	I0109 00:28:18.278136   15272 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:28:18.278390   15272 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:28:18.279954   15272 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:28:18.281407   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:28:18.281562   15272 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0109 00:28:18.287760   15272 out.go:177] * Enabled addons: 
	I0109 00:28:18.281931   15272 config.go:182] Loaded profile config "multinode-173500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0109 00:28:18.294796   15272 addons.go:508] enable addons completed in 13.2337ms: enabled=[]
	I0109 00:28:18.296350   15272 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:28:18.297407   15272 kapi.go:59] client config for multinode-173500: &rest.Config{Host:"https://172.24.109.120:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e2c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
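The rest.Config dumped above shows QPS:0 and Burst:0, so client-go falls back to its default client-side rate limit (historically around 5 requests/s with a burst of 10), which is where the earlier "Waited ... due to client-side throttling, not priority and fairness" lines come from. A hedged sketch of building the same kind of client config from the kubeconfig the log loads, with the rate limit raised; the values chosen are assumptions for illustration:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, copied from the "Config loaded from file" line above.
	kubeconfig := `C:\Users\jenkins.minikube1\minikube-integration\kubeconfig`
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}

	// Leaving QPS/Burst at zero (as in the dumped rest.Config) means client-go
	// applies its default client-side rate limiter; raising them reduces the
	// "Waited ... due to client-side throttling" pauses seen in this log.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("host=%s qps=%v burst=%v clientset=%T\n", cfg.Host, cfg.QPS, cfg.Burst, cs)
}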
	I0109 00:28:18.299050   15272 cert_rotation.go:137] Starting client certificate rotation controller
	I0109 00:28:18.299419   15272 round_trippers.go:463] GET https://172.24.109.120:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0109 00:28:18.299483   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:18.299483   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:18.299483   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:18.315417   15272 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0109 00:28:18.315417   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:18.315500   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:18.315500   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:18.315500   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:18.315500   15272 round_trippers.go:580]     Content-Length: 292
	I0109 00:28:18.315500   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:18 GMT
	I0109 00:28:18.315500   15272 round_trippers.go:580]     Audit-Id: 8308034a-c7ea-4e35-9ca0-c70ece8c0672
	I0109 00:28:18.315570   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:18.315600   15272 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"40c365d2-2414-4cb8-9731-fc615f6d2dcd","resourceVersion":"1737","creationTimestamp":"2024-01-09T00:05:31Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0109 00:28:18.315908   15272 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-173500" context rescaled to 1 replicas
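The coredns rescale above goes through the Deployment's autoscaling/v1 Scale subresource (the GET .../deployments/coredns/scale request in the log). A short client-go sketch of reading and, if needed, updating that subresource; this is an illustration of the API call, not minikube's kapi helper:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube1\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deployments := cs.AppsV1().Deployments("kube-system")

	// GET .../deployments/coredns/scale — the same endpoint hit in the log.
	scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("current replicas:", scale.Spec.Replicas)

	// Rescale to 1 replica only if it is not already there.
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
}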
	I0109 00:28:18.315908   15272 start.go:223] Will wait 6m0s for node &{Name: IP:172.24.109.120 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0109 00:28:18.319540   15272 out.go:177] * Verifying Kubernetes components...
	I0109 00:28:18.335530   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:28:18.426525   15272 command_runner.go:130] > apiVersion: v1
	I0109 00:28:18.426525   15272 command_runner.go:130] > data:
	I0109 00:28:18.426525   15272 command_runner.go:130] >   Corefile: |
	I0109 00:28:18.426525   15272 command_runner.go:130] >     .:53 {
	I0109 00:28:18.426525   15272 command_runner.go:130] >         log
	I0109 00:28:18.427531   15272 command_runner.go:130] >         errors
	I0109 00:28:18.427531   15272 command_runner.go:130] >         health {
	I0109 00:28:18.427556   15272 command_runner.go:130] >            lameduck 5s
	I0109 00:28:18.427556   15272 command_runner.go:130] >         }
	I0109 00:28:18.427556   15272 command_runner.go:130] >         ready
	I0109 00:28:18.427556   15272 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0109 00:28:18.427556   15272 command_runner.go:130] >            pods insecure
	I0109 00:28:18.427556   15272 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0109 00:28:18.427556   15272 command_runner.go:130] >            ttl 30
	I0109 00:28:18.427627   15272 command_runner.go:130] >         }
	I0109 00:28:18.427627   15272 command_runner.go:130] >         prometheus :9153
	I0109 00:28:18.427627   15272 command_runner.go:130] >         hosts {
	I0109 00:28:18.427627   15272 command_runner.go:130] >            172.24.96.1 host.minikube.internal
	I0109 00:28:18.427627   15272 command_runner.go:130] >            fallthrough
	I0109 00:28:18.427627   15272 command_runner.go:130] >         }
	I0109 00:28:18.427627   15272 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0109 00:28:18.427695   15272 command_runner.go:130] >            max_concurrent 1000
	I0109 00:28:18.427695   15272 command_runner.go:130] >         }
	I0109 00:28:18.427695   15272 command_runner.go:130] >         cache 30
	I0109 00:28:18.427695   15272 command_runner.go:130] >         loop
	I0109 00:28:18.427695   15272 command_runner.go:130] >         reload
	I0109 00:28:18.427695   15272 command_runner.go:130] >         loadbalance
	I0109 00:28:18.427695   15272 command_runner.go:130] >     }
	I0109 00:28:18.427766   15272 command_runner.go:130] > kind: ConfigMap
	I0109 00:28:18.427766   15272 command_runner.go:130] > metadata:
	I0109 00:28:18.427766   15272 command_runner.go:130] >   creationTimestamp: "2024-01-09T00:05:31Z"
	I0109 00:28:18.427766   15272 command_runner.go:130] >   name: coredns
	I0109 00:28:18.427766   15272 command_runner.go:130] >   namespace: kube-system
	I0109 00:28:18.427836   15272 command_runner.go:130] >   resourceVersion: "362"
	I0109 00:28:18.427836   15272 command_runner.go:130] >   uid: 3f96b20d-2896-4a3f-95df-633f61fcd852
	I0109 00:28:18.434124   15272 node_ready.go:35] waiting up to 6m0s for node "multinode-173500" to be "Ready" ...
	I0109 00:28:18.434769   15272 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
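The start.go:902 line skips patching CoreDNS because the Corefile dumped just above already carries a hosts block mapping host.minikube.internal to the host gateway (172.24.96.1). A minimal sketch of that kind of check against the coredns ConfigMap, assuming the same kubeconfig and namespace as the log; it is an illustration, not minikube's own start.go logic:

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube1\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The same ConfigMap the log dumps via kubectl above.
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// If the hosts block already resolves host.minikube.internal there is
	// nothing to patch, which is why the log prints "skipping...".
	if strings.Contains(cm.Data["Corefile"], "host.minikube.internal") {
		fmt.Println("CoreDNS already contains host.minikube.internal host record, skipping")
	} else {
		fmt.Println("Corefile needs a hosts entry for host.minikube.internal")
	}
}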
	I0109 00:28:18.455715   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:18.455715   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:18.455715   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:18.455789   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:18.459323   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:28:18.459323   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:18.459323   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:18 GMT
	I0109 00:28:18.459323   15272 round_trippers.go:580]     Audit-Id: 993219df-cf52-422d-a584-f4b15030d824
	I0109 00:28:18.459323   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:18.459323   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:18.459323   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:18.459323   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:18.459628   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:18.937326   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:18.937326   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:18.937326   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:18.937326   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:18.941941   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:18.941941   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:18.941941   15272 round_trippers.go:580]     Audit-Id: 7b40a18a-81b0-4bee-b73e-1ef7a6289414
	I0109 00:28:18.942395   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:18.942395   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:18.942395   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:18.942550   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:18.942550   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:18 GMT
	I0109 00:28:18.942787   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:19.444674   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:19.444803   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:19.444803   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:19.444803   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:19.449415   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:19.449415   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:19.449415   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:19.449415   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:19.449415   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:19 GMT
	I0109 00:28:19.449415   15272 round_trippers.go:580]     Audit-Id: 3d427490-3bc0-4acf-bf6a-349b4d6425df
	I0109 00:28:19.449415   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:19.449415   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:19.450370   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:19.950100   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:19.950232   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:19.950232   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:19.950232   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:19.955871   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:28:19.955871   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:19.955871   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:19.956867   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:19.956900   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:19.956900   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:19.956900   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:19 GMT
	I0109 00:28:19.956900   15272 round_trippers.go:580]     Audit-Id: be011850-b7bb-4947-a210-b1d1985be30b
	I0109 00:28:19.959553   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:20.438398   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:20.438549   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:20.438549   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:20.438549   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:20.442955   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:20.442955   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:20.442955   15272 round_trippers.go:580]     Audit-Id: d83ca35f-9b0f-4e6f-a5a8-b97ac628cac5
	I0109 00:28:20.442955   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:20.442955   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:20.442955   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:20.442955   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:20.442955   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:20 GMT
	I0109 00:28:20.444406   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:20.444619   15272 node_ready.go:58] node "multinode-173500" has status "Ready":"False"
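From here the log is node_ready.go polling GET /api/v1/nodes/multinode-173500 roughly twice a second until the node's Ready condition flips to True (it turns Ready further down, around resourceVersion 1789). A hedged client-go sketch of that polling loop, using the 6m0s budget stated earlier in the log; the interval is an assumption and this is not minikube's node_ready.go itself:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the NodeReady condition is True.
func nodeIsReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube1\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms (assumed interval) for up to the 6m0s the log allows.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-173500", metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors and keep polling
		}
		return nodeIsReady(node), nil
	})
	fmt.Println("node Ready:", err == nil)
}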
	I0109 00:28:20.938040   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:20.938142   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:20.938142   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:20.938142   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:20.942470   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:20.942470   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:20.942470   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:20.942623   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:20.942623   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:20 GMT
	I0109 00:28:20.942623   15272 round_trippers.go:580]     Audit-Id: 51874a23-d834-456d-bf78-bab7d0128779
	I0109 00:28:20.942623   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:20.942623   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:20.943064   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:21.438969   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:21.438969   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:21.438969   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:21.438969   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:21.446847   15272 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0109 00:28:21.447675   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:21.447675   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:21.447675   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:21.447675   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:21.447675   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:21.447675   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:21 GMT
	I0109 00:28:21.447675   15272 round_trippers.go:580]     Audit-Id: 888aa3bf-e6f3-4864-88c4-ad186d6d66fc
	I0109 00:28:21.447833   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:21.940252   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:21.940407   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:21.940407   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:21.940519   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:21.944355   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:21.944355   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:21.944355   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:21 GMT
	I0109 00:28:21.944355   15272 round_trippers.go:580]     Audit-Id: 0cb3f857-7e75-466f-9380-6f4884561a0b
	I0109 00:28:21.944355   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:21.945041   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:21.945041   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:21.945041   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:21.945485   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:22.443373   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:22.443485   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:22.443485   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:22.443485   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:22.447428   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:22.447428   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:22.448011   15272 round_trippers.go:580]     Audit-Id: 61b1f586-d1e0-40c7-8891-0f7bf701e3dc
	I0109 00:28:22.448011   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:22.448011   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:22.448011   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:22.448011   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:22.448011   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:22 GMT
	I0109 00:28:22.448620   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:22.449796   15272 node_ready.go:58] node "multinode-173500" has status "Ready":"False"
	I0109 00:28:22.940679   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:22.940679   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:22.940679   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:22.940679   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:22.945209   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:22.945209   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:22.945209   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:22.946212   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:22 GMT
	I0109 00:28:22.946212   15272 round_trippers.go:580]     Audit-Id: 80f314d1-eac9-4531-baca-ba564796fb43
	I0109 00:28:22.946212   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:22.946212   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:22.946260   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:22.946575   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:23.439325   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:23.439325   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:23.439325   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:23.439325   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:23.447586   15272 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0109 00:28:23.447586   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:23.447586   15272 round_trippers.go:580]     Audit-Id: 4a9938cd-4058-40f5-83ca-d2da6d897915
	I0109 00:28:23.447586   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:23.447586   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:23.447586   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:23.447586   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:23.447586   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:23 GMT
	I0109 00:28:23.447586   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:23.940391   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:23.940391   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:23.940476   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:23.940476   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:23.944725   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:23.944725   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:23.944725   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:23.944725   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:23.944725   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:23 GMT
	I0109 00:28:23.944725   15272 round_trippers.go:580]     Audit-Id: dc88aff8-8b2b-4554-bc45-2d149ef195db
	I0109 00:28:23.944725   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:23.944725   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:23.945347   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:24.445978   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:24.446272   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:24.446272   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:24.446272   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:24.450749   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:24.450749   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:24.450749   15272 round_trippers.go:580]     Audit-Id: 85e80c24-7594-487c-b5fa-7f5a82af18da
	I0109 00:28:24.450749   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:24.450749   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:24.451206   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:24.451206   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:24.451206   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:24 GMT
	I0109 00:28:24.452060   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:24.452628   15272 node_ready.go:58] node "multinode-173500" has status "Ready":"False"
	I0109 00:28:24.948798   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:24.948798   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:24.948798   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:24.948798   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:24.952997   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:24.952997   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:24.952997   15272 round_trippers.go:580]     Audit-Id: 7bdc33c5-d0bd-4d6d-80be-61af823dcace
	I0109 00:28:24.952997   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:24.952997   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:24.953540   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:24.953593   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:24.953593   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:24 GMT
	I0109 00:28:24.953664   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:25.438423   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:25.438423   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:25.438423   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:25.438423   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:25.442461   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:25.442461   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:25.442461   15272 round_trippers.go:580]     Audit-Id: f41201be-abf8-4697-b0f1-d6f9775f4f69
	I0109 00:28:25.443058   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:25.443058   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:25.443058   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:25.443058   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:25.443058   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:25 GMT
	I0109 00:28:25.443153   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:25.939965   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:25.939965   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:25.939965   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:25.939965   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:25.944529   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:25.944529   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:25.944529   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:25.944665   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:25.944665   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:25.944665   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:25.944665   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:25 GMT
	I0109 00:28:25.944665   15272 round_trippers.go:580]     Audit-Id: 6d2f64c1-5f0a-4b9f-affb-4b6fd2e9278e
	I0109 00:28:25.944665   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:26.445289   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:26.445289   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:26.445289   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:26.445289   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:26.449699   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:26.449699   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:26.449699   15272 round_trippers.go:580]     Audit-Id: 276a8a9a-bb1e-4334-822e-feeacfb7d57a
	I0109 00:28:26.450180   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:26.450180   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:26.450180   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:26.450180   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:26.450180   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:26 GMT
	I0109 00:28:26.450491   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:26.944803   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:26.944803   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:26.944803   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:26.944803   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:26.949218   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:26.949407   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:26.949407   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:26.949407   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:26 GMT
	I0109 00:28:26.949407   15272 round_trippers.go:580]     Audit-Id: 9f85f2ee-b659-4b1d-a006-ecf1362e5609
	I0109 00:28:26.949407   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:26.949407   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:26.949407   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:26.950018   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:26.950169   15272 node_ready.go:58] node "multinode-173500" has status "Ready":"False"
	I0109 00:28:27.441714   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:27.441805   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:27.441805   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:27.441805   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:27.448169   15272 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0109 00:28:27.448169   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:27.448169   15272 round_trippers.go:580]     Audit-Id: c31f3323-cd2d-422f-a169-538601cd9316
	I0109 00:28:27.448169   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:27.448169   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:27.448169   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:27.448169   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:27.448169   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:27 GMT
	I0109 00:28:27.448707   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:27.943254   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:27.943317   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:27.943362   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:27.943362   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:27.947378   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:27.947731   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:27.947731   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:27.947731   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:27 GMT
	I0109 00:28:27.947731   15272 round_trippers.go:580]     Audit-Id: 02bb5ede-3c3f-4378-b790-30ee6d60f184
	I0109 00:28:27.947731   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:27.947731   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:27.947731   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:27.947871   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:28.435602   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:28.435659   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:28.435659   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:28.435659   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:28.444082   15272 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0109 00:28:28.444082   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:28.444082   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:28 GMT
	I0109 00:28:28.444082   15272 round_trippers.go:580]     Audit-Id: 4fe884b3-4771-4f9c-8af5-da9c6d6f27cc
	I0109 00:28:28.444082   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:28.444082   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:28.444082   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:28.444082   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:28.444754   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:28.943145   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:28.943145   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:28.943204   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:28.943204   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:28.947621   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:28.947621   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:28.947621   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:28 GMT
	I0109 00:28:28.948601   15272 round_trippers.go:580]     Audit-Id: d0e2d802-2c23-48e4-a080-76c5690afc3e
	I0109 00:28:28.948601   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:28.948601   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:28.948601   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:28.948601   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:28.949450   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:29.449610   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:29.449610   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:29.449610   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:29.449610   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:29.455250   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:28:29.455250   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:29.455250   15272 round_trippers.go:580]     Audit-Id: ac51c96d-a389-4380-ab4f-8c9105b17b05
	I0109 00:28:29.455250   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:29.455250   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:29.455250   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:29.455250   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:29.455250   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:29 GMT
	I0109 00:28:29.455250   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:29.455250   15272 node_ready.go:58] node "multinode-173500" has status "Ready":"False"
	I0109 00:28:29.947563   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:29.947563   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:29.947645   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:29.947645   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:29.952003   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:29.952659   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:29.952659   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:29.952659   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:29.952659   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:29.952659   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:29.952659   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:29 GMT
	I0109 00:28:29.952659   15272 round_trippers.go:580]     Audit-Id: 16fae90c-6723-457e-9418-996682856d23
	I0109 00:28:29.953004   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:30.447379   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:30.447438   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:30.447438   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:30.447516   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:30.475742   15272 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0109 00:28:30.475742   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:30.475742   15272 round_trippers.go:580]     Audit-Id: c01c8ea9-1731-4838-86d3-cb2b5fad6784
	I0109 00:28:30.475742   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:30.476369   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:30.476369   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:30.476369   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:30.476369   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:30 GMT
	I0109 00:28:30.477092   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:30.948209   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:30.948293   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:30.948293   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:30.948293   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:30.951709   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:30.951709   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:30.951709   15272 round_trippers.go:580]     Audit-Id: 412f08bd-5073-4475-a6c5-f40cb7dca553
	I0109 00:28:30.951709   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:30.951709   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:30.951709   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:30.951966   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:30.951966   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:30 GMT
	I0109 00:28:30.952297   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:31.444615   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:31.444615   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:31.444615   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:31.444615   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:31.449211   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:31.449211   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:31.449211   15272 round_trippers.go:580]     Audit-Id: 71fc9b03-98e3-4a9e-b0be-3c56a176fdb5
	I0109 00:28:31.449211   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:31.449211   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:31.449211   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:31.449211   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:31.449211   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:31 GMT
	I0109 00:28:31.450934   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:31.947835   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:31.947923   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:31.947923   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:31.947923   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:31.958655   15272 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0109 00:28:31.958655   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:31.958655   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:31.958655   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:31.958655   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:31.958655   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:31.958655   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:31 GMT
	I0109 00:28:31.958655   15272 round_trippers.go:580]     Audit-Id: 650cca2f-066e-4256-bf0e-c72adfe38b4a
	I0109 00:28:31.960067   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:31.960629   15272 node_ready.go:58] node "multinode-173500" has status "Ready":"False"
	I0109 00:28:32.450648   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:32.450718   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:32.450718   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:32.450718   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:32.460134   15272 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0109 00:28:32.460134   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:32.460134   15272 round_trippers.go:580]     Audit-Id: 11a586e0-6826-4d35-8528-c8df0a94f1e6
	I0109 00:28:32.460134   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:32.460134   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:32.460134   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:32.460134   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:32.460134   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:32 GMT
	I0109 00:28:32.460729   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:32.948451   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:32.948451   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:32.948451   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:32.948451   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:32.956122   15272 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0109 00:28:32.956316   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:32.956316   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:32.956316   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:32.956316   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:32.956316   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:32.956316   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:32 GMT
	I0109 00:28:32.956316   15272 round_trippers.go:580]     Audit-Id: b33f1d04-e83a-41e6-ae90-5ab14d9a8437
	I0109 00:28:32.956659   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1823","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0109 00:28:32.957064   15272 node_ready.go:49] node "multinode-173500" has status "Ready":"True"
	I0109 00:28:32.957178   15272 node_ready.go:38] duration metric: took 14.5230535s waiting for node "multinode-173500" to be "Ready" ...
	I0109 00:28:32.957178   15272 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:28:32.957328   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods
	I0109 00:28:32.957388   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:32.957388   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:32.957388   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:32.967540   15272 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0109 00:28:32.967540   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:32.967540   15272 round_trippers.go:580]     Audit-Id: 386416e1-5b2c-49af-b161-98df9f2ed30f
	I0109 00:28:32.967540   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:32.967540   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:32.967540   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:32.967540   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:32.967540   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:32 GMT
	I0109 00:28:32.970209   15272 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1825"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"1809","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83455 chars]
	I0109 00:28:32.974968   15272 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:32.975010   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bkss9
	I0109 00:28:32.975010   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:32.975010   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:32.975010   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:32.978213   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:32.978213   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:32.978213   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:32.978213   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:32.978213   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:32.978213   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:32.978213   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:32 GMT
	I0109 00:28:32.978213   15272 round_trippers.go:580]     Audit-Id: 63773b39-d89e-48f7-9d91-fc1946268c10
	I0109 00:28:32.979555   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"1809","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6494 chars]
	I0109 00:28:32.980366   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:32.980366   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:32.980366   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:32.980366   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:32.983263   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:28:32.983263   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:32.983263   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:32.983263   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:32.983263   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:32 GMT
	I0109 00:28:32.983263   15272 round_trippers.go:580]     Audit-Id: 3d55f3ba-7468-4cad-a784-b6076c410de4
	I0109 00:28:32.983263   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:32.983263   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:32.984469   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1823","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0109 00:28:32.984865   15272 pod_ready.go:92] pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace has status "Ready":"True"
	I0109 00:28:32.984865   15272 pod_ready.go:81] duration metric: took 9.8547ms waiting for pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:32.984865   15272 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:32.984943   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-173500
	I0109 00:28:32.984943   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:32.984943   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:32.984943   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:32.987713   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:28:32.987713   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:32.987713   15272 round_trippers.go:580]     Audit-Id: a5ad06e6-0cba-49a8-8a91-9a6ab9c38a7f
	I0109 00:28:32.987713   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:32.987713   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:32.987713   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:32.987713   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:32.987713   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:32 GMT
	I0109 00:28:32.988926   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-173500","namespace":"kube-system","uid":"43da51b9-2249-4c4d-a9c0-4c899270d870","resourceVersion":"1777","creationTimestamp":"2024-01-09T00:28:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.24.109.120:2379","kubernetes.io/config.hash":"d21425b7f4d2774c35dc812132e81582","kubernetes.io/config.mirror":"d21425b7f4d2774c35dc812132e81582","kubernetes.io/config.seen":"2024-01-09T00:28:04.947418401Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5873 chars]
	I0109 00:28:32.989532   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:32.989643   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:32.989643   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:32.989643   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:32.995986   15272 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0109 00:28:32.995986   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:32.995986   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:32.995986   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:32.995986   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:32.995986   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:32.995986   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:32 GMT
	I0109 00:28:32.995986   15272 round_trippers.go:580]     Audit-Id: cdcdf480-0f21-4591-9644-06c21adc87bd
	I0109 00:28:32.996943   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1823","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0109 00:28:32.997223   15272 pod_ready.go:92] pod "etcd-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:28:32.997223   15272 pod_ready.go:81] duration metric: took 12.3584ms waiting for pod "etcd-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:32.997223   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:32.997223   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-173500
	I0109 00:28:32.997223   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:32.997223   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:32.997223   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:33.002861   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:28:33.002861   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:33.002861   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:33.002861   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:33.003620   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:33.003620   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:33.003620   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:33 GMT
	I0109 00:28:33.003620   15272 round_trippers.go:580]     Audit-Id: feba445f-904d-4abd-8653-3b628208b67c
	I0109 00:28:33.003843   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-173500","namespace":"kube-system","uid":"5c089ac2-fe84-48d8-9727-a932903b646d","resourceVersion":"1772","creationTimestamp":"2024-01-09T00:28:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.24.109.120:8443","kubernetes.io/config.hash":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.mirror":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.seen":"2024-01-09T00:28:04.947424101Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7634 chars]
	I0109 00:28:33.004329   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:33.004394   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:33.004394   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:33.004394   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:33.008357   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:33.008357   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:33.008357   15272 round_trippers.go:580]     Audit-Id: d4228a35-8084-41ca-ba1a-c9a5930fb54d
	I0109 00:28:33.008357   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:33.008357   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:33.008357   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:33.008357   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:33.008357   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:33 GMT
	I0109 00:28:33.009104   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1823","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0109 00:28:33.509736   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-173500
	I0109 00:28:33.509825   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:33.509825   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:33.509825   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:33.515815   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:28:33.515815   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:33.515815   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:33.515815   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:33.515815   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:33 GMT
	I0109 00:28:33.515815   15272 round_trippers.go:580]     Audit-Id: 25a23ccd-0361-47c1-8007-8af2ed647b06
	I0109 00:28:33.515815   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:33.515815   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:33.516599   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-173500","namespace":"kube-system","uid":"5c089ac2-fe84-48d8-9727-a932903b646d","resourceVersion":"1772","creationTimestamp":"2024-01-09T00:28:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.24.109.120:8443","kubernetes.io/config.hash":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.mirror":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.seen":"2024-01-09T00:28:04.947424101Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7634 chars]
	I0109 00:28:33.517403   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:33.517434   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:33.517434   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:33.517434   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:33.521082   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:33.521082   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:33.521082   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:33.521082   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:33.521082   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:33 GMT
	I0109 00:28:33.521082   15272 round_trippers.go:580]     Audit-Id: 6defc422-7924-4c93-b23a-cef309b3eba3
	I0109 00:28:33.521082   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:33.521234   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:33.521590   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1823","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0109 00:28:34.009836   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-173500
	I0109 00:28:34.009904   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:34.009904   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:34.009904   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:34.014654   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:34.014654   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:34.014654   15272 round_trippers.go:580]     Audit-Id: b8177ed8-f355-4051-a971-065f1c9e59d9
	I0109 00:28:34.014778   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:34.014778   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:34.014778   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:34.014778   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:34.014778   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:34 GMT
	I0109 00:28:34.015523   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-173500","namespace":"kube-system","uid":"5c089ac2-fe84-48d8-9727-a932903b646d","resourceVersion":"1772","creationTimestamp":"2024-01-09T00:28:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.24.109.120:8443","kubernetes.io/config.hash":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.mirror":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.seen":"2024-01-09T00:28:04.947424101Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7634 chars]
	I0109 00:28:34.016210   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:34.016210   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:34.016313   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:34.016313   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:34.023363   15272 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0109 00:28:34.023363   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:34.023363   15272 round_trippers.go:580]     Audit-Id: 93c82f16-ed49-487c-8807-adacebc02d75
	I0109 00:28:34.023363   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:34.023363   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:34.023363   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:34.023363   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:34.023363   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:34 GMT
	I0109 00:28:34.024204   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1823","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0109 00:28:34.500323   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-173500
	I0109 00:28:34.500447   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:34.500447   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:34.500447   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:34.504839   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:34.504839   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:34.504839   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:34.504839   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:34.504839   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:34.504839   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:34 GMT
	I0109 00:28:34.504839   15272 round_trippers.go:580]     Audit-Id: 47cdd00f-b341-4de9-8e29-54e25b448a67
	I0109 00:28:34.504839   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:34.505466   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-173500","namespace":"kube-system","uid":"5c089ac2-fe84-48d8-9727-a932903b646d","resourceVersion":"1772","creationTimestamp":"2024-01-09T00:28:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.24.109.120:8443","kubernetes.io/config.hash":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.mirror":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.seen":"2024-01-09T00:28:04.947424101Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7634 chars]
	I0109 00:28:34.506785   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:34.506897   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:34.506897   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:34.506897   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:34.510334   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:34.510334   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:34.510334   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:34.510334   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:34.510842   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:34.510842   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:34 GMT
	I0109 00:28:34.510842   15272 round_trippers.go:580]     Audit-Id: 05af92c8-e192-42db-97f5-8fc43561f6f8
	I0109 00:28:34.510842   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:34.511016   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1823","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0109 00:28:34.999986   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-173500
	I0109 00:28:34.999986   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:34.999986   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:34.999986   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:35.009547   15272 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0109 00:28:35.009547   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:35.009547   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:35.009547   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:35.009547   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:35.009547   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:35.009547   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:35 GMT
	I0109 00:28:35.009547   15272 round_trippers.go:580]     Audit-Id: db3fc8ee-fa20-4e37-bd35-f18567e12cf3
	I0109 00:28:35.009547   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-173500","namespace":"kube-system","uid":"5c089ac2-fe84-48d8-9727-a932903b646d","resourceVersion":"1772","creationTimestamp":"2024-01-09T00:28:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.24.109.120:8443","kubernetes.io/config.hash":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.mirror":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.seen":"2024-01-09T00:28:04.947424101Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7634 chars]
	I0109 00:28:35.010804   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:35.010909   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:35.010909   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:35.010909   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:35.013093   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:28:35.013093   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:35.013093   15272 round_trippers.go:580]     Audit-Id: c84aeecf-18cb-4aa2-a72c-2866076fbee2
	I0109 00:28:35.013093   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:35.013093   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:35.013093   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:35.014039   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:35.014039   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:35 GMT
	I0109 00:28:35.014247   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1823","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0109 00:28:35.014834   15272 pod_ready.go:102] pod "kube-apiserver-multinode-173500" in "kube-system" namespace has status "Ready":"False"
	I0109 00:28:35.503632   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-173500
	I0109 00:28:35.503696   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:35.503696   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:35.503696   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:35.508673   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:35.508673   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:35.508673   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:35 GMT
	I0109 00:28:35.508673   15272 round_trippers.go:580]     Audit-Id: 5b08a41a-adb8-474c-9fbe-e379efe9a53b
	I0109 00:28:35.508673   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:35.508673   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:35.508673   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:35.508673   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:35.510074   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-173500","namespace":"kube-system","uid":"5c089ac2-fe84-48d8-9727-a932903b646d","resourceVersion":"1830","creationTimestamp":"2024-01-09T00:28:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.24.109.120:8443","kubernetes.io/config.hash":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.mirror":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.seen":"2024-01-09T00:28:04.947424101Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7409 chars]
	I0109 00:28:35.511088   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:35.511193   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:35.511193   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:35.511193   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:35.517013   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:28:35.517013   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:35.517013   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:35.517013   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:35.517013   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:35.517013   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:35 GMT
	I0109 00:28:35.517013   15272 round_trippers.go:580]     Audit-Id: ce198a41-28cb-411a-bdfc-43e56b605b88
	I0109 00:28:35.517013   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:35.517013   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0109 00:28:35.518114   15272 pod_ready.go:92] pod "kube-apiserver-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:28:35.518183   15272 pod_ready.go:81] duration metric: took 2.5209594s waiting for pod "kube-apiserver-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:35.518183   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:35.518183   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-173500
	I0109 00:28:35.518183   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:35.518183   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:35.518183   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:35.522588   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:35.522588   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:35.522588   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:35 GMT
	I0109 00:28:35.522588   15272 round_trippers.go:580]     Audit-Id: c146cadb-6f80-45c4-b928-2e0bb62c3454
	I0109 00:28:35.522588   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:35.522588   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:35.522588   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:35.522588   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:35.522588   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-173500","namespace":"kube-system","uid":"a0252ea5-5d6a-4303-b7e6-151481d4cd8a","resourceVersion":"1796","creationTimestamp":"2024-01-09T00:05:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f6b180d5a2686dc98b0355b6df7f53ea","kubernetes.io/config.mirror":"f6b180d5a2686dc98b0355b6df7f53ea","kubernetes.io/config.seen":"2024-01-09T00:05:31.606504770Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7179 chars]
	I0109 00:28:35.523494   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:35.523494   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:35.523494   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:35.523494   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:35.526659   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:35.527435   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:35.527435   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:35.527482   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:35.527482   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:35 GMT
	I0109 00:28:35.527482   15272 round_trippers.go:580]     Audit-Id: 4df09479-b61c-4ed1-aef5-4f241d618ada
	I0109 00:28:35.527482   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:35.527482   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:35.527734   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0109 00:28:35.527980   15272 pod_ready.go:92] pod "kube-controller-manager-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:28:35.527980   15272 pod_ready.go:81] duration metric: took 9.797ms waiting for pod "kube-controller-manager-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:35.527980   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4h4sv" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:35.527980   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4h4sv
	I0109 00:28:35.527980   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:35.527980   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:35.527980   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:35.530631   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:28:35.530631   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:35.530631   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:35.530631   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:35.530631   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:35.530631   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:35 GMT
	I0109 00:28:35.530631   15272 round_trippers.go:580]     Audit-Id: 2ef6fc69-d55a-4042-9c3b-bb9a844bb9b7
	I0109 00:28:35.530631   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:35.532298   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4h4sv","generateName":"kube-proxy-","namespace":"kube-system","uid":"a45861ba-73e0-452f-a535-f66e154ea1c6","resourceVersion":"592","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec2434b9-012c-4df1-b401-04556fed7700","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec2434b9-012c-4df1-b401-04556fed7700\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0109 00:28:35.533065   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:28:35.533143   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:35.533143   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:35.533143   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:35.536386   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:35.536459   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:35.536459   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:35 GMT
	I0109 00:28:35.536459   15272 round_trippers.go:580]     Audit-Id: c715e16c-4a05-4059-a894-9864b3c9a04a
	I0109 00:28:35.536459   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:35.536459   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:35.536459   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:35.536459   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:35.536876   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"1573","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_23_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager": [truncated 3819 chars]
	I0109 00:28:35.537327   15272 pod_ready.go:92] pod "kube-proxy-4h4sv" in "kube-system" namespace has status "Ready":"True"
	I0109 00:28:35.537351   15272 pod_ready.go:81] duration metric: took 9.3715ms waiting for pod "kube-proxy-4h4sv" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:35.537351   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mj6ks" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:35.548918   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mj6ks
	I0109 00:28:35.549040   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:35.549040   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:35.549040   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:35.551265   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:28:35.551265   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:35.551265   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:35.551265   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:35.551265   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:35 GMT
	I0109 00:28:35.551265   15272 round_trippers.go:580]     Audit-Id: 2d767a85-5d43-485d-9db8-4a34b2fc44af
	I0109 00:28:35.552087   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:35.552087   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:35.552473   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mj6ks","generateName":"kube-proxy-","namespace":"kube-system","uid":"bd23c4c8-d363-4a3f-b750-a3de2346a3bb","resourceVersion":"1587","creationTimestamp":"2024-01-09T00:13:24Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec2434b9-012c-4df1-b401-04556fed7700","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:13:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec2434b9-012c-4df1-b401-04556fed7700\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0109 00:28:35.751810   15272 request.go:629] Waited for 199.337ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m03
	I0109 00:28:35.751810   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m03
	I0109 00:28:35.751810   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:35.751810   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:35.751810   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:35.756599   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:35.756599   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:35.756599   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:35 GMT
	I0109 00:28:35.756599   15272 round_trippers.go:580]     Audit-Id: 059c2e17-8259-49a9-9759-c2d966f467df
	I0109 00:28:35.756599   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:35.757485   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:35.757485   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:35.757485   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:35.757801   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m03","uid":"9d8a783a-d01b-498d-94ae-1e3f65e7667c","resourceVersion":"1603","creationTimestamp":"2024-01-09T00:23:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_23_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:23:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3636 chars]
	I0109 00:28:35.759038   15272 pod_ready.go:92] pod "kube-proxy-mj6ks" in "kube-system" namespace has status "Ready":"True"
	I0109 00:28:35.759161   15272 pod_ready.go:81] duration metric: took 221.8096ms waiting for pod "kube-proxy-mj6ks" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:35.759161   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qrtm6" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:35.954348   15272 request.go:629] Waited for 194.8019ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qrtm6
	I0109 00:28:35.954572   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qrtm6
	I0109 00:28:35.954572   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:35.954572   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:35.954572   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:35.960085   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:28:35.960085   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:35.960085   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:35.960085   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:35.960085   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:35 GMT
	I0109 00:28:35.960566   15272 round_trippers.go:580]     Audit-Id: 4506f8da-372f-45e2-9215-bbaddb1a4674
	I0109 00:28:35.960566   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:35.960566   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:35.961953   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qrtm6","generateName":"kube-proxy-","namespace":"kube-system","uid":"37d066e0-6ff3-4f22-abc3-6bddfa64736e","resourceVersion":"1833","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec2434b9-012c-4df1-b401-04556fed7700","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec2434b9-012c-4df1-b401-04556fed7700\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0109 00:28:36.156805   15272 request.go:629] Waited for 194.0074ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:36.157065   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:36.157110   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:36.157110   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:36.157110   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:36.160696   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:36.160696   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:36.160696   15272 round_trippers.go:580]     Audit-Id: 6f9c05dd-484b-4ef2-b456-ad817c8443f1
	I0109 00:28:36.161214   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:36.161214   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:36.161214   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:36.161214   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:36.161214   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:36 GMT
	I0109 00:28:36.161487   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0109 00:28:36.162068   15272 pod_ready.go:92] pod "kube-proxy-qrtm6" in "kube-system" namespace has status "Ready":"True"
	I0109 00:28:36.162068   15272 pod_ready.go:81] duration metric: took 402.9067ms waiting for pod "kube-proxy-qrtm6" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:36.162139   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:36.359841   15272 request.go:629] Waited for 197.3842ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-173500
	I0109 00:28:36.360238   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-173500
	I0109 00:28:36.360303   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:36.360303   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:36.360303   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:36.364016   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:36.364016   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:36.364016   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:36.364016   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:36.364016   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:36.364016   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:36.364381   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:36 GMT
	I0109 00:28:36.364381   15272 round_trippers.go:580]     Audit-Id: 586f6628-bb6d-4d11-a63b-4659061bb668
	I0109 00:28:36.364889   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-173500","namespace":"kube-system","uid":"31d8cdf6-292f-4b3c-87c5-951fc34d0ea4","resourceVersion":"1829","creationTimestamp":"2024-01-09T00:05:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70306498a200a6bbe0aa0b41e240d63b","kubernetes.io/config.mirror":"70306498a200a6bbe0aa0b41e240d63b","kubernetes.io/config.seen":"2024-01-09T00:05:21.481168866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4909 chars]
	I0109 00:28:36.548919   15272 request.go:629] Waited for 183.8335ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:36.549255   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:36.549255   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:36.549316   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:36.549316   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:36.553690   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:36.554663   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:36.554721   15272 round_trippers.go:580]     Audit-Id: 45257db1-1c70-4e68-90d1-5911917c411d
	I0109 00:28:36.554721   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:36.554721   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:36.554721   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:36.554721   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:36.554721   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:36 GMT
	I0109 00:28:36.555843   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0109 00:28:36.556583   15272 pod_ready.go:92] pod "kube-scheduler-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:28:36.556703   15272 pod_ready.go:81] duration metric: took 394.5645ms waiting for pod "kube-scheduler-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:36.556703   15272 pod_ready.go:38] duration metric: took 3.5995245s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
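(The pod_ready.go entries above poll each control-plane pod until its Ready condition reports True, retrying roughly every 500ms. A minimal client-go sketch of that kind of check follows; the kubeconfig path, poll interval, and hard-coded pod name are illustrative assumptions, not minikube's actual code.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the named pod has its Ready condition set to True,
	// the same condition the pod_ready.go log lines above are waiting on.
	func podReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// The kubeconfig location is an assumption for this sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", "path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			ok, err := podReady(cs, "kube-system", "kube-apiserver-multinode-173500")
			if err == nil && ok {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // the log shows ~500ms between polls
		}
	}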
	I0109 00:28:36.556811   15272 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:28:36.572050   15272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:28:36.590608   15272 command_runner.go:130] > 1838
	I0109 00:28:36.591438   15272 api_server.go:72] duration metric: took 18.2753157s to wait for apiserver process to appear ...
	I0109 00:28:36.591438   15272 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:28:36.591438   15272 api_server.go:253] Checking apiserver healthz at https://172.24.109.120:8443/healthz ...
	I0109 00:28:36.600224   15272 api_server.go:279] https://172.24.109.120:8443/healthz returned 200:
	ok
	I0109 00:28:36.600463   15272 round_trippers.go:463] GET https://172.24.109.120:8443/version
	I0109 00:28:36.600463   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:36.600463   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:36.600463   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:36.601664   15272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0109 00:28:36.601664   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:36.601664   15272 round_trippers.go:580]     Content-Length: 264
	I0109 00:28:36.601664   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:36 GMT
	I0109 00:28:36.601664   15272 round_trippers.go:580]     Audit-Id: 800ac1b8-3469-4a2e-a908-456fbf37c4a4
	I0109 00:28:36.601664   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:36.602449   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:36.602449   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:36.602449   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:36.602449   15272 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0109 00:28:36.602546   15272 api_server.go:141] control plane version: v1.28.4
	I0109 00:28:36.602546   15272 api_server.go:131] duration metric: took 11.1086ms to wait for apiserver health ...
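(The healthz and version exchange recorded above is a pair of plain GETs against the apiserver. The sketch below reproduces it outside minikube; it assumes anonymous access to /healthz and /version is enabled, and it skips TLS verification only because the cluster uses minikube's self-signed CA.)

	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Demo-only client: a real caller would load the cluster CA instead.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}

		// /healthz returns the literal "ok" seen in the log when the apiserver is healthy.
		resp, err := client.Get("https://172.24.109.120:8443/healthz")
		if err != nil {
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)

		// /version returns the JSON document reproduced above.
		resp, err = client.Get("https://172.24.109.120:8443/version")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		var v struct {
			GitVersion string `json:"gitVersion"`
			Platform   string `json:"platform"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
			panic(err)
		}
		fmt.Println("control plane version:", v.GitVersion)
	}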
	I0109 00:28:36.602546   15272 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:28:36.751421   15272 request.go:629] Waited for 148.7771ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods
	I0109 00:28:36.751669   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods
	I0109 00:28:36.751669   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:36.751669   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:36.751669   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:36.761118   15272 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0109 00:28:36.761118   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:36.761118   15272 round_trippers.go:580]     Audit-Id: 86da45dc-ef91-4477-b8c4-d278cda81392
	I0109 00:28:36.761118   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:36.761118   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:36.761118   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:36.761118   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:36.761118   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:36 GMT
	I0109 00:28:36.763826   15272 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1838"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"1809","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82555 chars]
	I0109 00:28:36.768039   15272 system_pods.go:59] 12 kube-system pods found
	I0109 00:28:36.768039   15272 system_pods.go:61] "coredns-5dd5756b68-bkss9" [463fb6c6-1e85-419f-9c13-96e58a2ec22e] Running
	I0109 00:28:36.768099   15272 system_pods.go:61] "etcd-multinode-173500" [43da51b9-2249-4c4d-a9c0-4c899270d870] Running
	I0109 00:28:36.768099   15272 system_pods.go:61] "kindnet-6nz87" [73ad6ec4-cbfb-4b93-888c-3d430f3c7bf2] Running
	I0109 00:28:36.768099   15272 system_pods.go:61] "kindnet-ht547" [711faf1a-9f77-487c-bd84-1e227ab9c51a] Running
	I0109 00:28:36.768099   15272 system_pods.go:61] "kindnet-t72cs" [63893803-de87-4df9-ac98-3772bd46603c] Running
	I0109 00:28:36.768099   15272 system_pods.go:61] "kube-apiserver-multinode-173500" [5c089ac2-fe84-48d8-9727-a932903b646d] Running
	I0109 00:28:36.768099   15272 system_pods.go:61] "kube-controller-manager-multinode-173500" [a0252ea5-5d6a-4303-b7e6-151481d4cd8a] Running
	I0109 00:28:36.768151   15272 system_pods.go:61] "kube-proxy-4h4sv" [a45861ba-73e0-452f-a535-f66e154ea1c6] Running
	I0109 00:28:36.768151   15272 system_pods.go:61] "kube-proxy-mj6ks" [bd23c4c8-d363-4a3f-b750-a3de2346a3bb] Running
	I0109 00:28:36.768151   15272 system_pods.go:61] "kube-proxy-qrtm6" [37d066e0-6ff3-4f22-abc3-6bddfa64736e] Running
	I0109 00:28:36.768151   15272 system_pods.go:61] "kube-scheduler-multinode-173500" [31d8cdf6-292f-4b3c-87c5-951fc34d0ea4] Running
	I0109 00:28:36.768151   15272 system_pods.go:61] "storage-provisioner" [936240bb-4bdd-4681-91a9-cb458c623805] Running
	I0109 00:28:36.768151   15272 system_pods.go:74] duration metric: took 165.6045ms to wait for pod list to return data ...
	I0109 00:28:36.768151   15272 default_sa.go:34] waiting for default service account to be created ...
	I0109 00:28:36.955393   15272 request.go:629] Waited for 187.056ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/default/serviceaccounts
	I0109 00:28:36.955741   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/default/serviceaccounts
	I0109 00:28:36.955741   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:36.955793   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:36.955793   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:36.960020   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:36.960020   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:36.960020   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:36 GMT
	I0109 00:28:36.961039   15272 round_trippers.go:580]     Audit-Id: 42e9d292-87d5-4d40-bd7d-4ed39783ad5a
	I0109 00:28:36.961071   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:36.961071   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:36.961071   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:36.961071   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:36.961071   15272 round_trippers.go:580]     Content-Length: 262
	I0109 00:28:36.961071   15272 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1838"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a9cc6a7c-f512-49f6-8485-edb39bd8695b","resourceVersion":"311","creationTimestamp":"2024-01-09T00:05:44Z"}}]}
	I0109 00:28:36.961388   15272 default_sa.go:45] found service account: "default"
	I0109 00:28:36.961496   15272 default_sa.go:55] duration metric: took 193.287ms for default service account to be created ...
	I0109 00:28:36.961496   15272 system_pods.go:116] waiting for k8s-apps to be running ...
	I0109 00:28:37.157172   15272 request.go:629] Waited for 195.676ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods
	I0109 00:28:37.157342   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods
	I0109 00:28:37.157342   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:37.157555   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:37.157555   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:37.164895   15272 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0109 00:28:37.164895   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:37.164895   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:37.165143   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:37.165143   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:37.165143   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:37 GMT
	I0109 00:28:37.165143   15272 round_trippers.go:580]     Audit-Id: 4ab64dd9-3c79-4448-912a-1678bf5f75b6
	I0109 00:28:37.165143   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:37.167267   15272 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1838"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"1809","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82555 chars]
	I0109 00:28:37.171238   15272 system_pods.go:86] 12 kube-system pods found
	I0109 00:28:37.171304   15272 system_pods.go:89] "coredns-5dd5756b68-bkss9" [463fb6c6-1e85-419f-9c13-96e58a2ec22e] Running
	I0109 00:28:37.171304   15272 system_pods.go:89] "etcd-multinode-173500" [43da51b9-2249-4c4d-a9c0-4c899270d870] Running
	I0109 00:28:37.171304   15272 system_pods.go:89] "kindnet-6nz87" [73ad6ec4-cbfb-4b93-888c-3d430f3c7bf2] Running
	I0109 00:28:37.171304   15272 system_pods.go:89] "kindnet-ht547" [711faf1a-9f77-487c-bd84-1e227ab9c51a] Running
	I0109 00:28:37.171304   15272 system_pods.go:89] "kindnet-t72cs" [63893803-de87-4df9-ac98-3772bd46603c] Running
	I0109 00:28:37.171304   15272 system_pods.go:89] "kube-apiserver-multinode-173500" [5c089ac2-fe84-48d8-9727-a932903b646d] Running
	I0109 00:28:37.171304   15272 system_pods.go:89] "kube-controller-manager-multinode-173500" [a0252ea5-5d6a-4303-b7e6-151481d4cd8a] Running
	I0109 00:28:37.171304   15272 system_pods.go:89] "kube-proxy-4h4sv" [a45861ba-73e0-452f-a535-f66e154ea1c6] Running
	I0109 00:28:37.171304   15272 system_pods.go:89] "kube-proxy-mj6ks" [bd23c4c8-d363-4a3f-b750-a3de2346a3bb] Running
	I0109 00:28:37.171304   15272 system_pods.go:89] "kube-proxy-qrtm6" [37d066e0-6ff3-4f22-abc3-6bddfa64736e] Running
	I0109 00:28:37.171304   15272 system_pods.go:89] "kube-scheduler-multinode-173500" [31d8cdf6-292f-4b3c-87c5-951fc34d0ea4] Running
	I0109 00:28:37.171304   15272 system_pods.go:89] "storage-provisioner" [936240bb-4bdd-4681-91a9-cb458c623805] Running
	I0109 00:28:37.171304   15272 system_pods.go:126] duration metric: took 209.8079ms to wait for k8s-apps to be running ...
	I0109 00:28:37.171304   15272 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:28:37.184180   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:28:37.206772   15272 system_svc.go:56] duration metric: took 35.4678ms WaitForService to wait for kubelet.
	I0109 00:28:37.207036   15272 kubeadm.go:581] duration metric: took 18.8909923s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:28:37.207036   15272 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:28:37.361206   15272 request.go:629] Waited for 153.9876ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes
	I0109 00:28:37.361295   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes
	I0109 00:28:37.361591   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:37.361729   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:37.361729   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:37.367568   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:28:37.367629   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:37.367629   15272 round_trippers.go:580]     Audit-Id: d2ca5be5-3390-4cfe-af53-c9aa55fe2780
	I0109 00:28:37.367629   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:37.367629   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:37.367629   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:37.367629   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:37.367629   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:37 GMT
	I0109 00:28:37.367629   15272 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1838"},"items":[{"metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14731 chars]
	I0109 00:28:37.369889   15272 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:28:37.369889   15272 node_conditions.go:123] node cpu capacity is 2
	I0109 00:28:37.369889   15272 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:28:37.369889   15272 node_conditions.go:123] node cpu capacity is 2
	I0109 00:28:37.369889   15272 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:28:37.369889   15272 node_conditions.go:123] node cpu capacity is 2
	I0109 00:28:37.369889   15272 node_conditions.go:105] duration metric: took 162.8532ms to run NodePressure ...
	I0109 00:28:37.369889   15272 start.go:228] waiting for startup goroutines ...
	I0109 00:28:37.369889   15272 start.go:233] waiting for cluster config update ...
	I0109 00:28:37.369889   15272 start.go:242] writing updated cluster config ...
	I0109 00:28:37.384585   15272 config.go:182] Loaded profile config "multinode-173500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0109 00:28:37.385133   15272 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\config.json ...
	I0109 00:28:37.392945   15272 out.go:177] * Starting worker node multinode-173500-m02 in cluster multinode-173500
	I0109 00:28:37.395094   15272 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0109 00:28:37.395094   15272 cache.go:56] Caching tarball of preloaded images
	I0109 00:28:37.395094   15272 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0109 00:28:37.395767   15272 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0109 00:28:37.396102   15272 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\config.json ...
	I0109 00:28:37.398782   15272 start.go:365] acquiring machines lock for multinode-173500-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0109 00:28:37.398782   15272 start.go:369] acquired machines lock for "multinode-173500-m02" in 0s
	I0109 00:28:37.399338   15272 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:28:37.399379   15272 fix.go:54] fixHost starting: m02
	I0109 00:28:37.399708   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:28:39.577919   15272 main.go:141] libmachine: [stdout =====>] : Off
	
	I0109 00:28:39.577919   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:28:39.577919   15272 fix.go:102] recreateIfNeeded on multinode-173500-m02: state=Stopped err=<nil>
	W0109 00:28:39.577919   15272 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:28:39.583296   15272 out.go:177] * Restarting existing hyperv VM for "multinode-173500-m02" ...
	I0109 00:28:39.585584   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-173500-m02
	I0109 00:28:42.762854   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:28:42.762923   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:28:42.762923   15272 main.go:141] libmachine: Waiting for host to start...
	I0109 00:28:42.762975   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:28:45.119301   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:28:45.119412   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:28:45.119412   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:28:47.727166   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:28:47.727166   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:28:48.729901   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:28:51.004550   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:28:51.004862   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:28:51.005069   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:28:53.660076   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:28:53.660076   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:28:54.660430   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:28:56.920798   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:28:56.920871   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:28:56.920871   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:28:59.495358   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:28:59.495411   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:00.496020   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:02.775841   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:02.775841   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:02.776088   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:29:05.386612   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:29:05.386612   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:06.390567   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:08.637523   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:08.637736   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:08.637870   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:29:11.301517   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:29:11.301912   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:11.304510   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:13.497706   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:13.497939   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:13.497939   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:29:16.089562   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:29:16.089862   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:16.090128   15272 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\config.json ...
	I0109 00:29:16.092894   15272 machine.go:88] provisioning docker machine ...
	I0109 00:29:16.092972   15272 buildroot.go:166] provisioning hostname "multinode-173500-m02"
	I0109 00:29:16.092972   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:18.286737   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:18.286737   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:18.286821   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:29:20.945545   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:29:20.945545   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:20.951745   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:29:20.952715   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.111.157 22 <nil> <nil>}
	I0109 00:29:20.952801   15272 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-173500-m02 && echo "multinode-173500-m02" | sudo tee /etc/hostname
	I0109 00:29:21.117188   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-173500-m02
	
	I0109 00:29:21.117266   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:23.319941   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:23.320145   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:23.320364   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:29:25.972137   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:29:25.972310   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:25.979464   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:29:25.981241   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.111.157 22 <nil> <nil>}
	I0109 00:29:25.981241   15272 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-173500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-173500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-173500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:29:26.133009   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:29:26.133009   15272 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0109 00:29:26.133009   15272 buildroot.go:174] setting up certificates
	I0109 00:29:26.133009   15272 provision.go:83] configureAuth start
	I0109 00:29:26.133009   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:28.340541   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:28.340797   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:28.340797   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:29:31.027653   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:29:31.027950   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:31.027950   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:33.240556   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:33.240556   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:33.240647   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:29:35.834183   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:29:35.834368   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:35.834368   15272 provision.go:138] copyHostCerts
	I0109 00:29:35.834587   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0109 00:29:35.835112   15272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0109 00:29:35.835112   15272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0109 00:29:35.835112   15272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0109 00:29:35.836803   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0109 00:29:35.837161   15272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0109 00:29:35.837192   15272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0109 00:29:35.837557   15272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0109 00:29:35.838642   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0109 00:29:35.838895   15272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0109 00:29:35.839007   15272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0109 00:29:35.839249   15272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0109 00:29:35.840225   15272 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-173500-m02 san=[172.24.111.157 172.24.111.157 localhost 127.0.0.1 minikube multinode-173500-m02]
	I0109 00:29:36.125186   15272 provision.go:172] copyRemoteCerts
	I0109 00:29:36.140853   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:29:36.140853   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:38.303588   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:38.303588   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:38.303691   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:29:40.931147   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:29:40.931147   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:40.931477   15272 sshutil.go:53] new ssh client: &{IP:172.24.111.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02\id_rsa Username:docker}
	I0109 00:29:41.042198   15272 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9012129s)
	I0109 00:29:41.042259   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0109 00:29:41.042777   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0109 00:29:41.086473   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0109 00:29:41.086473   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0109 00:29:41.127730   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0109 00:29:41.127730   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0109 00:29:41.172560   15272 provision.go:86] duration metric: configureAuth took 15.0394719s
	I0109 00:29:41.172650   15272 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:29:41.173594   15272 config.go:182] Loaded profile config "multinode-173500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0109 00:29:41.173685   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:43.361592   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:43.361592   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:43.361720   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:29:46.004121   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:29:46.004322   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:46.010533   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:29:46.011306   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.111.157 22 <nil> <nil>}
	I0109 00:29:46.011306   15272 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0109 00:29:46.154457   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0109 00:29:46.154542   15272 buildroot.go:70] root file system type: tmpfs
	I0109 00:29:46.154618   15272 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0109 00:29:46.154618   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:48.373298   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:48.373298   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:48.373397   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:29:51.066630   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:29:51.066938   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:51.073715   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:29:51.074698   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.111.157 22 <nil> <nil>}
	I0109 00:29:51.074911   15272 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.24.109.120"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0109 00:29:51.242233   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.24.109.120
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0109 00:29:51.242366   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:53.432117   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:53.432117   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:53.432481   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:29:55.997212   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:29:55.997352   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:56.004079   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:29:56.005460   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.111.157 22 <nil> <nil>}
	I0109 00:29:56.005460   15272 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0109 00:29:57.312410   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0109 00:29:57.312410   15272 machine.go:91] provisioned docker machine in 41.2195118s
	I0109 00:29:57.312410   15272 start.go:300] post-start starting for "multinode-173500-m02" (driver="hyperv")
	I0109 00:29:57.312410   15272 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:29:57.326563   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:29:57.326563   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:59.521278   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:59.521455   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:59.521729   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:30:02.164699   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:30:02.164910   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:02.165171   15272 sshutil.go:53] new ssh client: &{IP:172.24.111.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02\id_rsa Username:docker}
	I0109 00:30:02.276584   15272 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9500207s)
	I0109 00:30:02.292174   15272 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:30:02.298959   15272 command_runner.go:130] > NAME=Buildroot
	I0109 00:30:02.298959   15272 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0109 00:30:02.298959   15272 command_runner.go:130] > ID=buildroot
	I0109 00:30:02.298959   15272 command_runner.go:130] > VERSION_ID=2021.02.12
	I0109 00:30:02.298959   15272 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0109 00:30:02.299331   15272 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:30:02.299331   15272 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0109 00:30:02.300032   15272 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0109 00:30:02.301498   15272 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0109 00:30:02.301498   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0109 00:30:02.316666   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:30:02.333827   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0109 00:30:02.375661   15272 start.go:303] post-start completed in 5.0632505s
	I0109 00:30:02.375661   15272 fix.go:56] fixHost completed within 1m24.9762731s
	I0109 00:30:02.375661   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:30:04.564657   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:30:04.564742   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:04.564830   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:30:07.191342   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:30:07.191486   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:07.197497   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:30:07.198335   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.111.157 22 <nil> <nil>}
	I0109 00:30:07.198335   15272 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0109 00:30:07.338755   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704760207.333519426
	
	I0109 00:30:07.338755   15272 fix.go:206] guest clock: 1704760207.333519426
	I0109 00:30:07.338755   15272 fix.go:219] Guest: 2024-01-09 00:30:07.333519426 +0000 UTC Remote: 2024-01-09 00:30:02.3756614 +0000 UTC m=+236.760321701 (delta=4.957858026s)
	I0109 00:30:07.338755   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:30:09.499713   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:30:09.499713   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:09.499713   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:30:12.123647   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:30:12.123647   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:12.130401   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:30:12.131119   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.111.157 22 <nil> <nil>}
	I0109 00:30:12.131119   15272 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1704760207
	I0109 00:30:12.280877   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jan  9 00:30:07 UTC 2024
	
	I0109 00:30:12.280877   15272 fix.go:226] clock set: Tue Jan  9 00:30:07 UTC 2024
	 (err=<nil>)
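The clock fix above follows a simple pattern: read the guest's wall clock over SSH, compare it against the host's, and if the drift exceeds a small threshold push the host's epoch seconds into the guest with date -s. A minimal shell sketch of that pattern, assuming plain SSH access as the docker user at the guest IP shown in the log (minikube itself performs these steps through its libmachine SSH client, and the threshold below is illustrative):

	# read the guest clock and compute the drift against the host
	guest=$(ssh docker@172.24.111.157 'date +%s')
	host=$(date +%s)
	drift=$(( host - guest ))
	# if the absolute drift exceeds a couple of seconds, set the guest clock from the host
	if [ "${drift#-}" -gt 2 ]; then
	  ssh docker@172.24.111.157 "sudo date -s @${host}"
	fi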
	I0109 00:30:12.280877   15272 start.go:83] releasing machines lock for "multinode-173500-m02", held for 1m34.8820854s
	I0109 00:30:12.281163   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:30:14.444218   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:30:14.444218   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:14.444218   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:30:17.027633   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:30:17.027809   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:17.029739   15272 out.go:177] * Found network options:
	I0109 00:30:17.033510   15272 out.go:177]   - NO_PROXY=172.24.109.120
	W0109 00:30:17.035601   15272 proxy.go:119] fail to check proxy env: Error ip not in block
	I0109 00:30:17.038721   15272 out.go:177]   - NO_PROXY=172.24.109.120
	W0109 00:30:17.042782   15272 proxy.go:119] fail to check proxy env: Error ip not in block
	W0109 00:30:17.044396   15272 proxy.go:119] fail to check proxy env: Error ip not in block
	I0109 00:30:17.046684   15272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:30:17.046684   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:30:17.058939   15272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0109 00:30:17.058939   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:30:19.284576   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:30:19.284783   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:19.284576   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:30:19.284882   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:30:19.284882   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:19.285086   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:30:21.984934   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:30:21.985157   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:21.985388   15272 sshutil.go:53] new ssh client: &{IP:172.24.111.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02\id_rsa Username:docker}
	I0109 00:30:22.012960   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:30:22.013084   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:22.013323   15272 sshutil.go:53] new ssh client: &{IP:172.24.111.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02\id_rsa Username:docker}
	I0109 00:30:22.181636   15272 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0109 00:30:22.181636   15272 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0109 00:30:22.181750   15272 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1349511s)
	I0109 00:30:22.181750   15272 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1228113s)
	W0109 00:30:22.181750   15272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:30:22.198679   15272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:30:22.224037   15272 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0109 00:30:22.224037   15272 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:30:22.224169   15272 start.go:475] detecting cgroup driver to use...
	I0109 00:30:22.224420   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:30:22.255923   15272 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0109 00:30:22.271433   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0109 00:30:22.311501   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0109 00:30:22.328635   15272 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0109 00:30:22.342294   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0109 00:30:22.373662   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0109 00:30:22.406793   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0109 00:30:22.437314   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0109 00:30:22.474101   15272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:30:22.506106   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0109 00:30:22.535418   15272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:30:22.550688   15272 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0109 00:30:22.564669   15272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:30:22.595523   15272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:30:22.763807   15272 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0109 00:30:22.789479   15272 start.go:475] detecting cgroup driver to use...
	I0109 00:30:22.803596   15272 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0109 00:30:22.824242   15272 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0109 00:30:22.824242   15272 command_runner.go:130] > [Unit]
	I0109 00:30:22.824342   15272 command_runner.go:130] > Description=Docker Application Container Engine
	I0109 00:30:22.824342   15272 command_runner.go:130] > Documentation=https://docs.docker.com
	I0109 00:30:22.824342   15272 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0109 00:30:22.824342   15272 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0109 00:30:22.824342   15272 command_runner.go:130] > StartLimitBurst=3
	I0109 00:30:22.824342   15272 command_runner.go:130] > StartLimitIntervalSec=60
	I0109 00:30:22.824342   15272 command_runner.go:130] > [Service]
	I0109 00:30:22.824342   15272 command_runner.go:130] > Type=notify
	I0109 00:30:22.824342   15272 command_runner.go:130] > Restart=on-failure
	I0109 00:30:22.824342   15272 command_runner.go:130] > Environment=NO_PROXY=172.24.109.120
	I0109 00:30:22.824342   15272 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0109 00:30:22.824342   15272 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0109 00:30:22.824342   15272 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0109 00:30:22.824342   15272 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0109 00:30:22.824342   15272 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0109 00:30:22.824342   15272 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0109 00:30:22.824342   15272 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0109 00:30:22.824342   15272 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0109 00:30:22.824342   15272 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0109 00:30:22.824342   15272 command_runner.go:130] > ExecStart=
	I0109 00:30:22.824342   15272 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0109 00:30:22.824342   15272 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0109 00:30:22.824342   15272 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0109 00:30:22.824342   15272 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0109 00:30:22.824342   15272 command_runner.go:130] > LimitNOFILE=infinity
	I0109 00:30:22.824342   15272 command_runner.go:130] > LimitNPROC=infinity
	I0109 00:30:22.824342   15272 command_runner.go:130] > LimitCORE=infinity
	I0109 00:30:22.824342   15272 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0109 00:30:22.824342   15272 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0109 00:30:22.824342   15272 command_runner.go:130] > TasksMax=infinity
	I0109 00:30:22.824342   15272 command_runner.go:130] > TimeoutStartSec=0
	I0109 00:30:22.824342   15272 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0109 00:30:22.824342   15272 command_runner.go:130] > Delegate=yes
	I0109 00:30:22.824342   15272 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0109 00:30:22.824342   15272 command_runner.go:130] > KillMode=process
	I0109 00:30:22.824342   15272 command_runner.go:130] > [Install]
	I0109 00:30:22.824342   15272 command_runner.go:130] > WantedBy=multi-user.target
	I0109 00:30:22.842554   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:30:22.872829   15272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:30:22.916239   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:30:22.951525   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0109 00:30:22.984607   15272 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0109 00:30:23.048939   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0109 00:30:23.073444   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:30:23.102522   15272 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0109 00:30:23.120616   15272 ssh_runner.go:195] Run: which cri-dockerd
	I0109 00:30:23.125971   15272 command_runner.go:130] > /usr/bin/cri-dockerd
	I0109 00:30:23.141575   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0109 00:30:23.158155   15272 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0109 00:30:23.201496   15272 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0109 00:30:23.379012   15272 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0109 00:30:23.540649   15272 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0109 00:30:23.540681   15272 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0109 00:30:23.586217   15272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:30:23.753832   15272 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0109 00:30:25.415147   15272 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6613144s)
	I0109 00:30:25.429957   15272 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0109 00:30:25.611502   15272 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0109 00:30:25.777258   15272 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0109 00:30:25.949773   15272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:30:26.127881   15272 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0109 00:30:26.173468   15272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:30:26.359584   15272 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0109 00:30:26.467093   15272 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0109 00:30:26.483334   15272 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0109 00:30:26.491034   15272 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0109 00:30:26.491034   15272 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0109 00:30:26.491034   15272 command_runner.go:130] > Device: 16h/22d	Inode: 901         Links: 1
	I0109 00:30:26.491034   15272 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0109 00:30:26.491034   15272 command_runner.go:130] > Access: 2024-01-09 00:30:26.357538478 +0000
	I0109 00:30:26.491034   15272 command_runner.go:130] > Modify: 2024-01-09 00:30:26.357538478 +0000
	I0109 00:30:26.491034   15272 command_runner.go:130] > Change: 2024-01-09 00:30:26.362538478 +0000
	I0109 00:30:26.491034   15272 command_runner.go:130] >  Birth: -
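The "Will wait 60s for socket path" step above amounts to polling until the cri-dockerd socket exists, then stat-ing it as shown. A minimal shell sketch of such a wait loop (minikube implements the retry in Go; the loop below only illustrates the check being performed):

	# poll for up to 60 seconds until the CRI socket appears
	for i in $(seq 1 60); do
	  [ -S /var/run/cri-dockerd.sock ] && break
	  sleep 1
	done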
	I0109 00:30:26.491034   15272 start.go:543] Will wait 60s for crictl version
	I0109 00:30:26.507221   15272 ssh_runner.go:195] Run: which crictl
	I0109 00:30:26.512165   15272 command_runner.go:130] > /usr/bin/crictl
	I0109 00:30:26.526544   15272 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:30:26.603592   15272 command_runner.go:130] > Version:  0.1.0
	I0109 00:30:26.603710   15272 command_runner.go:130] > RuntimeName:  docker
	I0109 00:30:26.603710   15272 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0109 00:30:26.603710   15272 command_runner.go:130] > RuntimeApiVersion:  v1
	I0109 00:30:26.604102   15272 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0109 00:30:26.614991   15272 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0109 00:30:26.650555   15272 command_runner.go:130] > 24.0.7
	I0109 00:30:26.662197   15272 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0109 00:30:26.694166   15272 command_runner.go:130] > 24.0.7
	I0109 00:30:26.698965   15272 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0109 00:30:26.702770   15272 out.go:177]   - env NO_PROXY=172.24.109.120
	I0109 00:30:26.704426   15272 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0109 00:30:26.709338   15272 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0109 00:30:26.709338   15272 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0109 00:30:26.709338   15272 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0109 00:30:26.709338   15272 ip.go:207] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:c4:61:0a Flags:up|broadcast|multicast|running}
	I0109 00:30:26.712343   15272 ip.go:210] interface addr: fe80::3fa5:15f5:46dc:dc8f/64
	I0109 00:30:26.712343   15272 ip.go:210] interface addr: 172.24.96.1/20
	I0109 00:30:26.725349   15272 ssh_runner.go:195] Run: grep 172.24.96.1	host.minikube.internal$ /etc/hosts
	I0109 00:30:26.731400   15272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.24.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:30:26.751524   15272 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500 for IP: 172.24.111.157
	I0109 00:30:26.751524   15272 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:30:26.752339   15272 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0109 00:30:26.752701   15272 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0109 00:30:26.752985   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0109 00:30:26.753394   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0109 00:30:26.753716   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0109 00:30:26.754159   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0109 00:30:26.755125   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0109 00:30:26.755710   15272 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0109 00:30:26.755840   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0109 00:30:26.756339   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0109 00:30:26.756787   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0109 00:30:26.757234   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0109 00:30:26.758218   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0109 00:30:26.758521   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0109 00:30:26.758812   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:30:26.759166   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0109 00:30:26.761952   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:30:26.804351   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0109 00:30:26.849692   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:30:26.889234   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:30:26.928297   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0109 00:30:26.965433   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:30:27.004528   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0109 00:30:27.064739   15272 ssh_runner.go:195] Run: openssl version
	I0109 00:30:27.075725   15272 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0109 00:30:27.089857   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0109 00:30:27.120877   15272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0109 00:30:27.126834   15272 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 23:11 /usr/share/ca-certificates/142882.pem
	I0109 00:30:27.126834   15272 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:11 /usr/share/ca-certificates/142882.pem
	I0109 00:30:27.140484   15272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0109 00:30:27.149579   15272 command_runner.go:130] > 3ec20f2e
	I0109 00:30:27.163459   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:30:27.194542   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:30:27.225734   15272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:30:27.232884   15272 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 22:56 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:30:27.232884   15272 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:56 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:30:27.250094   15272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:30:27.257324   15272 command_runner.go:130] > b5213941
	I0109 00:30:27.273114   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:30:27.306616   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0109 00:30:27.336824   15272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0109 00:30:27.344058   15272 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 23:11 /usr/share/ca-certificates/14288.pem
	I0109 00:30:27.344058   15272 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:11 /usr/share/ca-certificates/14288.pem
	I0109 00:30:27.358092   15272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0109 00:30:27.365849   15272 command_runner.go:130] > 51391683
	I0109 00:30:27.379246   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
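
The openssl/ln sequence above installs minikube's CA certificates on the guest: each PEM file is hashed with "openssl x509 -hash -noout" and then linked under /etc/ssl/certs/<hash>.0 so the system trust store can resolve it. A minimal Go sketch of those two steps, assuming openssl is on PATH and the process may write to /etc/ssl/certs (illustrative only; minikube runs the equivalent commands over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert hashes a certificate's subject with openssl and links it under
// /etc/ssl/certs/<hash>.0, mirroring the commands in the log above.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace a stale link if one exists
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
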
	I0109 00:30:27.411405   15272 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:30:27.416973   15272 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0109 00:30:27.416973   15272 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0109 00:30:27.428433   15272 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0109 00:30:27.465860   15272 command_runner.go:130] > cgroupfs
	I0109 00:30:27.465860   15272 cni.go:84] Creating CNI manager for ""
	I0109 00:30:27.465860   15272 cni.go:136] 3 nodes found, recommending kindnet
	I0109 00:30:27.465860   15272 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:30:27.465860   15272 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.24.111.157 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-173500 NodeName:multinode-173500-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.24.109.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.24.111.157 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:30:27.465860   15272 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.24.111.157
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-173500-m02"
	  kubeletExtraArgs:
	    node-ip: 172.24.111.157
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.24.109.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0109 00:30:27.465860   15272 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-173500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.24.111.157
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-173500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
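
The kubeadm manifest and kubelet flags above are rendered from the options struct logged at kubeadm.go:176. A trimmed text/template sketch of how the InitConfiguration section can be produced from those fields (the template and type below are illustrative, not minikube's actual ones):

package main

import (
	"fmt"
	"os"
	"text/template"
)

// A cut-down InitConfiguration template driven by the same fields that appear
// in the kubeadm options logged above (illustrative only).
const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

type kubeadmOptions struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
	NodeIP           string
}

func main() {
	opts := kubeadmOptions{
		AdvertiseAddress: "172.24.111.157",
		APIServerPort:    8443,
		CRISocket:        "/var/run/cri-dockerd.sock",
		NodeName:         "multinode-173500-m02",
		NodeIP:           "172.24.111.157",
	}
	tmpl := template.Must(template.New("init").Parse(initTmpl))
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
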
	I0109 00:30:27.480427   15272 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0109 00:30:27.498139   15272 command_runner.go:130] > kubeadm
	I0109 00:30:27.498139   15272 command_runner.go:130] > kubectl
	I0109 00:30:27.498139   15272 command_runner.go:130] > kubelet
	I0109 00:30:27.498206   15272 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:30:27.511860   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0109 00:30:27.527124   15272 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (383 bytes)
	I0109 00:30:27.553750   15272 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0109 00:30:27.597538   15272 ssh_runner.go:195] Run: grep 172.24.109.120	control-plane.minikube.internal$ /etc/hosts
	I0109 00:30:27.603559   15272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.24.109.120	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
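
The bash one-liner above pins control-plane.minikube.internal to the control-plane IP in the worker's /etc/hosts. An equivalent Go sketch, assuming permission to rewrite /etc/hosts (illustrative only):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "<tab><name>" and appends
// a fresh "<ip><tab><name>" mapping, mirroring the bash one-liner above.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "172.24.109.120", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
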
	I0109 00:30:27.620478   15272 host.go:66] Checking if "multinode-173500" exists ...
	I0109 00:30:27.621606   15272 config.go:182] Loaded profile config "multinode-173500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0109 00:30:27.621606   15272 start.go:304] JoinCluster: &{Name:multinode-173500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-173500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.24.109.120 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.24.111.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.24.100.87 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:30:27.621835   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0109 00:30:27.621948   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:30:29.784396   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:30:29.784583   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:29.784678   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:30:32.381616   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:30:32.381616   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:32.381862   15272 sshutil.go:53] new ssh client: &{IP:172.24.109.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\id_rsa Username:docker}
	I0109 00:30:32.583582   15272 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token o4ugah.wbuog6qrdb131mae --discovery-token-ca-cert-hash sha256:6a12e94bf3397e7db59fa944f4e20c2c2c34b5794397b381e3c5134eb1900391 
	I0109 00:30:32.583664   15272 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9617753s)
	I0109 00:30:32.583664   15272 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.24.111.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0109 00:30:32.583664   15272 host.go:66] Checking if "multinode-173500" exists ...
	I0109 00:30:32.598099   15272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-173500-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0109 00:30:32.598099   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:30:34.805509   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:30:34.805509   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:34.805778   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:30:37.401513   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:30:37.401513   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:37.401787   15272 sshutil.go:53] new ssh client: &{IP:172.24.109.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\id_rsa Username:docker}
	I0109 00:30:37.596318   15272 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0109 00:30:37.681494   15272 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-t72cs, kube-system/kube-proxy-4h4sv
	I0109 00:30:40.724522   15272 command_runner.go:130] > node/multinode-173500-m02 cordoned
	I0109 00:30:40.724889   15272 command_runner.go:130] > pod "busybox-5bc68d56bd-txtnl" has DeletionTimestamp older than 1 seconds, skipping
	I0109 00:30:40.724889   15272 command_runner.go:130] > node/multinode-173500-m02 drained
	I0109 00:30:40.728297   15272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-173500-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (8.1301977s)
	I0109 00:30:40.728297   15272 node.go:108] successfully drained node "m02"
	I0109 00:30:40.728922   15272 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:30:40.730290   15272 kapi.go:59] client config for multinode-173500: &rest.Config{Host:"https://172.24.109.120:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e2c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0109 00:30:40.730557   15272 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0109 00:30:40.731335   15272 round_trippers.go:463] DELETE https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:40.731335   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:40.731335   15272 round_trippers.go:473]     Content-Type: application/json
	I0109 00:30:40.731335   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:40.731335   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:40.750379   15272 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0109 00:30:40.750379   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:40.750379   15272 round_trippers.go:580]     Audit-Id: b5cb8855-8e00-4202-9c10-d1bda015852b
	I0109 00:30:40.751314   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:40.751314   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:40.751314   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:40.751348   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:40.751348   15272 round_trippers.go:580]     Content-Length: 171
	I0109 00:30:40.751374   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:40 GMT
	I0109 00:30:40.751374   15272 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-173500-m02","kind":"nodes","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0"}}
	I0109 00:30:40.751516   15272 node.go:124] successfully deleted node "m02"
	I0109 00:30:40.751539   15272 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.24.111.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
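
Before rejoining, the stale m02 entry is drained with kubectl and its Node object is deleted through the API, as in the DELETE request above. A minimal client-go sketch of that removal step, assuming a kubeconfig path in KUBECONFIG (illustrative only, not minikube's actual code):

package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// removeWorker drains a worker node with kubectl and then deletes the Node
// object via the API, mirroring the "remove before rejoin" step above.
func removeWorker(kubeconfig, node string) error {
	drain := exec.Command("kubectl", "--kubeconfig", kubeconfig, "drain", node,
		"--force", "--grace-period=1", "--ignore-daemonsets", "--delete-emptydir-data")
	drain.Stdout, drain.Stderr = os.Stdout, os.Stderr
	if err := drain.Run(); err != nil {
		return fmt.Errorf("drain %s: %w", node, err)
	}

	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	return client.CoreV1().Nodes().Delete(context.Background(), node, metav1.DeleteOptions{})
}

func main() {
	if err := removeWorker(os.Getenv("KUBECONFIG"), "multinode-173500-m02"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
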
	I0109 00:30:40.751567   15272 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.24.111.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0109 00:30:40.751567   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token o4ugah.wbuog6qrdb131mae --discovery-token-ca-cert-hash sha256:6a12e94bf3397e7db59fa944f4e20c2c2c34b5794397b381e3c5134eb1900391 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-173500-m02"
	I0109 00:30:41.032352   15272 command_runner.go:130] > [preflight] Running pre-flight checks
	I0109 00:30:41.673258   15272 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0109 00:30:41.674049   15272 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0109 00:30:41.728343   15272 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:30:41.730604   15272 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:30:41.730975   15272 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0109 00:30:41.898856   15272 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0109 00:30:43.435338   15272 command_runner.go:130] > This node has joined the cluster:
	I0109 00:30:43.436021   15272 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0109 00:30:43.436021   15272 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0109 00:30:43.436067   15272 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0109 00:30:43.440380   15272 command_runner.go:130] ! W0109 00:30:41.009737    1365 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0109 00:30:43.440421   15272 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0109 00:30:43.440454   15272 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token o4ugah.wbuog6qrdb131mae --discovery-token-ca-cert-hash sha256:6a12e94bf3397e7db59fa944f4e20c2c2c34b5794397b381e3c5134eb1900391 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-173500-m02": (2.6888867s)
	I0109 00:30:43.440454   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0109 00:30:43.711250   15272 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
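
The worker is then joined with the token printed earlier and the kubelet unit is enabled and started. A Go sketch of the same command sequence, reusing the join parameters from this run and assuming it executes as root on the worker (illustrative only):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// joinWorker runs kubeadm join followed by the systemctl commands that enable
// and start the kubelet, as in the log above.
func joinWorker(joinCmd []string) error {
	for _, args := range [][]string{
		joinCmd,
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "kubelet"},
		{"systemctl", "start", "kubelet"},
	} {
		cmd := exec.Command(args[0], args[1:]...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("%v: %w", args, err)
		}
	}
	return nil
}

func main() {
	join := []string{"kubeadm", "join", "control-plane.minikube.internal:8443",
		"--token", "o4ugah.wbuog6qrdb131mae",
		"--discovery-token-ca-cert-hash", "sha256:6a12e94bf3397e7db59fa944f4e20c2c2c34b5794397b381e3c5134eb1900391",
		"--ignore-preflight-errors=all",
		"--cri-socket", "/var/run/cri-dockerd.sock",
		"--node-name=multinode-173500-m02"}
	if err := joinWorker(join); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
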
	I0109 00:30:43.970316   15272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=multinode-173500 minikube.k8s.io/updated_at=2024_01_09T00_30_43_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:44.147085   15272 command_runner.go:130] > node/multinode-173500-m02 labeled
	I0109 00:30:44.147218   15272 command_runner.go:130] > node/multinode-173500-m03 labeled
	I0109 00:30:44.147218   15272 start.go:306] JoinCluster complete in 16.5256106s
	I0109 00:30:44.147218   15272 cni.go:84] Creating CNI manager for ""
	I0109 00:30:44.147218   15272 cni.go:136] 3 nodes found, recommending kindnet
	I0109 00:30:44.162958   15272 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0109 00:30:44.171953   15272 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0109 00:30:44.171953   15272 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0109 00:30:44.171953   15272 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0109 00:30:44.171953   15272 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0109 00:30:44.171953   15272 command_runner.go:130] > Access: 2024-01-09 00:26:43.947705700 +0000
	I0109 00:30:44.171953   15272 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0109 00:30:44.171953   15272 command_runner.go:130] > Change: 2024-01-09 00:26:31.489000000 +0000
	I0109 00:30:44.172937   15272 command_runner.go:130] >  Birth: -
	I0109 00:30:44.172937   15272 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0109 00:30:44.172937   15272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0109 00:30:44.217540   15272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0109 00:30:44.656521   15272 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0109 00:30:44.656668   15272 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0109 00:30:44.656668   15272 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0109 00:30:44.656668   15272 command_runner.go:130] > daemonset.apps/kindnet configured
	I0109 00:30:44.657458   15272 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:30:44.658192   15272 kapi.go:59] client config for multinode-173500: &rest.Config{Host:"https://172.24.109.120:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e2c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0109 00:30:44.658692   15272 round_trippers.go:463] GET https://172.24.109.120:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0109 00:30:44.658692   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:44.658692   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:44.658692   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:44.666753   15272 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0109 00:30:44.666753   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:44.666753   15272 round_trippers.go:580]     Audit-Id: 3cc26e8f-61e9-49da-9767-4832e6b0d4e7
	I0109 00:30:44.666753   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:44.666753   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:44.666753   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:44.666753   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:44.666753   15272 round_trippers.go:580]     Content-Length: 292
	I0109 00:30:44.666753   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:44 GMT
	I0109 00:30:44.666753   15272 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"40c365d2-2414-4cb8-9731-fc615f6d2dcd","resourceVersion":"1814","creationTimestamp":"2024-01-09T00:05:31Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0109 00:30:44.666753   15272 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-173500" context rescaled to 1 replicas
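
The GET and rescale above go through the coredns deployment's scale subresource to keep it at one replica. A client-go sketch of the same interaction, assuming a kubeconfig in KUBECONFIG (illustrative only):

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// rescaleCoreDNS reads the coredns deployment's scale subresource and writes
// back the desired replica count if it differs.
func rescaleCoreDNS(kubeconfig string, replicas int32) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	ctx := context.Background()
	scale, err := client.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if scale.Spec.Replicas == replicas {
		return nil // already at the desired size
	}
	scale.Spec.Replicas = replicas
	_, err = client.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}

func main() {
	if err := rescaleCoreDNS(os.Getenv("KUBECONFIG"), 1); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
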
	I0109 00:30:44.666753   15272 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.24.111.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0109 00:30:44.671638   15272 out.go:177] * Verifying Kubernetes components...
	I0109 00:30:44.687646   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:30:44.710645   15272 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:30:44.711644   15272 kapi.go:59] client config for multinode-173500: &rest.Config{Host:"https://172.24.109.120:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e2c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0109 00:30:44.711644   15272 node_ready.go:35] waiting up to 6m0s for node "multinode-173500-m02" to be "Ready" ...
	I0109 00:30:44.712649   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:44.712649   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:44.712649   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:44.712649   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:44.716646   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:30:44.716646   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:44.716646   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:44 GMT
	I0109 00:30:44.716646   15272 round_trippers.go:580]     Audit-Id: d591fe2c-ed8d-4549-9091-09fe84c48d0a
	I0109 00:30:44.716646   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:44.716646   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:44.716829   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:44.716829   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:44.717244   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"1998","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3782 chars]
	I0109 00:30:45.220005   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:45.220138   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:45.220138   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:45.220287   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:45.225704   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:30:45.225704   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:45.225704   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:45.225704   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:45.225704   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:45.225704   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:45.225704   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:45 GMT
	I0109 00:30:45.225704   15272 round_trippers.go:580]     Audit-Id: c12ecfe5-7b90-4438-8fa7-72f5eab5caf7
	I0109 00:30:45.225704   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"1998","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3782 chars]
	I0109 00:30:45.725159   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:45.725234   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:45.725234   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:45.725302   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:45.729065   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:30:45.729065   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:45.729183   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:45.729183   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:45.729183   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:45.729183   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:45.729183   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:45 GMT
	I0109 00:30:45.729183   15272 round_trippers.go:580]     Audit-Id: 1c858047-7378-4795-83fa-1cbcd858cec3
	I0109 00:30:45.729388   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:46.212921   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:46.212921   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:46.212921   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:46.212921   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:46.218684   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:30:46.218684   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:46.218955   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:46.218955   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:46.218955   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:46.218955   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:46.218955   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:46 GMT
	I0109 00:30:46.218955   15272 round_trippers.go:580]     Audit-Id: 4fef0360-33b2-4d8b-bdbd-98d01eb23780
	I0109 00:30:46.219112   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:46.715674   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:46.715674   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:46.715674   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:46.715910   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:46.719221   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:30:46.720229   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:46.720229   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:46.720229   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:46.720229   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:46.720229   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:46.720229   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:46 GMT
	I0109 00:30:46.720229   15272 round_trippers.go:580]     Audit-Id: cd6817c6-8c73-4f22-9ff1-c16874fd989b
	I0109 00:30:46.720432   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:46.721024   15272 node_ready.go:58] node "multinode-173500-m02" has status "Ready":"False"
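
The repeated GETs are minikube polling the Node object until its Ready condition turns True (node_ready.go waits up to 6m0s). A client-go sketch of that readiness check, assuming a kubeconfig in KUBECONFIG (illustrative only):

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the Node object until its Ready condition is True or the
// timeout expires, the same check driving the GETs above.
func waitNodeReady(kubeconfig, node string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		n, err := client.CoreV1().Nodes().Get(context.Background(), node, metav1.GetOptions{})
		if err == nil {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %s", node, timeout)
}

func main() {
	if err := waitNodeReady(os.Getenv("KUBECONFIG"), "multinode-173500-m02", 6*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
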
	I0109 00:30:47.218596   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:47.218596   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:47.218596   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:47.218596   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:47.225020   15272 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0109 00:30:47.225020   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:47.225020   15272 round_trippers.go:580]     Audit-Id: e6614d03-0e74-4cd3-8c01-e81399d8f9e6
	I0109 00:30:47.225020   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:47.225020   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:47.225020   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:47.225020   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:47.225020   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:47 GMT
	I0109 00:30:47.225843   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:47.721912   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:47.721912   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:47.722252   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:47.722252   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:47.727622   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:30:47.727622   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:47.727622   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:47.727622   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:47 GMT
	I0109 00:30:47.727622   15272 round_trippers.go:580]     Audit-Id: d7f25ae5-d62c-4151-9817-eedd79b32a7f
	I0109 00:30:47.727817   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:47.727817   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:47.727817   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:47.728056   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:48.222732   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:48.222732   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:48.222732   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:48.222732   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:48.227361   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:30:48.227361   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:48.227361   15272 round_trippers.go:580]     Audit-Id: 89c493ae-636a-4a42-b368-a72964af7f4c
	I0109 00:30:48.227361   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:48.227361   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:48.227361   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:48.227361   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:48.227703   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:48 GMT
	I0109 00:30:48.228041   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:48.716408   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:48.716408   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:48.716497   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:48.716497   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:48.720869   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:30:48.720869   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:48.721368   15272 round_trippers.go:580]     Audit-Id: 141cfd0e-a4d3-41a8-aa8f-512137f92470
	I0109 00:30:48.721368   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:48.721368   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:48.721368   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:48.721368   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:48.721368   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:48 GMT
	I0109 00:30:48.721674   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:48.722370   15272 node_ready.go:58] node "multinode-173500-m02" has status "Ready":"False"
	I0109 00:30:49.224247   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:49.224305   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:49.224342   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:49.224342   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:49.232068   15272 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0109 00:30:49.232068   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:49.232068   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:49.232068   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:49.232068   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:49 GMT
	I0109 00:30:49.232068   15272 round_trippers.go:580]     Audit-Id: 6a751ac4-9998-481e-963e-ee1716cfbb72
	I0109 00:30:49.232068   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:49.232068   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:49.232621   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:49.714528   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:49.714528   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:49.714528   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:49.714528   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:49.719097   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:30:49.719343   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:49.719343   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:49.719414   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:49 GMT
	I0109 00:30:49.719414   15272 round_trippers.go:580]     Audit-Id: b2ff800c-f85d-4813-84a6-7e8a94361207
	I0109 00:30:49.719414   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:49.719414   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:49.719414   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:49.719414   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:50.215553   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:50.215659   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:50.215659   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:50.215659   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:50.222158   15272 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0109 00:30:50.222158   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:50.222158   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:50.222158   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:50 GMT
	I0109 00:30:50.222158   15272 round_trippers.go:580]     Audit-Id: e15c8ecc-31ea-4b6f-a7b3-170f1ceaad52
	I0109 00:30:50.222158   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:50.222158   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:50.222158   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:50.223016   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:50.717613   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:50.717613   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:50.717752   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:50.717752   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:50.722083   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:30:50.722192   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:50.722192   15272 round_trippers.go:580]     Audit-Id: d3ea3a3b-76c7-412b-a153-d0881803b619
	I0109 00:30:50.722192   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:50.722192   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:50.722192   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:50.722192   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:50.722192   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:50 GMT
	I0109 00:30:50.722511   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:50.723133   15272 node_ready.go:58] node "multinode-173500-m02" has status "Ready":"False"
	I0109 00:30:51.218988   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:51.218988   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.219058   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.219058   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.223422   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:30:51.223673   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.223673   15272 round_trippers.go:580]     Audit-Id: ff0f7ea3-06f3-4976-b542-f13047b6422c
	I0109 00:30:51.223673   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.223673   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.223673   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.223673   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.223778   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.223778   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:51.712714   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:51.712803   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.712803   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.712803   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.716228   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:30:51.716228   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.716228   15272 round_trippers.go:580]     Audit-Id: f22c1f3f-6b4d-49a6-98ce-a2d668eeb2cb
	I0109 00:30:51.716228   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.717005   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.717005   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.717005   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.717005   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.717286   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2022","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3926 chars]
	I0109 00:30:51.717725   15272 node_ready.go:49] node "multinode-173500-m02" has status "Ready":"True"
	I0109 00:30:51.717725   15272 node_ready.go:38] duration metric: took 7.0060807s waiting for node "multinode-173500-m02" to be "Ready" ...
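
The node_ready lines above record minikube polling GET /api/v1/nodes/multinode-173500-m02 roughly every 500ms until the node's Ready condition flips to True (about 7s in this run). Purely as an illustration, and not minikube's actual helper, a comparable readiness poll written against client-go could look like the following sketch; the kubeconfig path, interval and timeout are assumptions:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the named node until its Ready condition reports True
    // or the timeout expires, mirroring the ~500ms cadence visible in the log above.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
    }

    func main() {
        // Assumption: a reachable cluster described by the default kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitNodeReady(context.Background(), cs, "multinode-173500-m02", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("node is Ready")
    }
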
	I0109 00:30:51.717725   15272 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:30:51.717869   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods
	I0109 00:30:51.718033   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.718033   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.718033   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.723463   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:30:51.723942   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.723942   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.723942   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.723942   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.724011   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.724011   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.724096   15272 round_trippers.go:580]     Audit-Id: 8a5ee13f-727e-4511-adcb-0b87e029c099
	I0109 00:30:51.727281   15272 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2024"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"1809","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83425 chars]
	I0109 00:30:51.731164   15272 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:51.731324   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bkss9
	I0109 00:30:51.731324   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.731324   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.731424   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.733697   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:30:51.734700   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.734700   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.734700   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.734700   15272 round_trippers.go:580]     Audit-Id: 4562c795-f599-420e-a327-2fb4777fcdad
	I0109 00:30:51.734700   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.734781   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.734781   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.734979   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"1809","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6494 chars]
	I0109 00:30:51.735476   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:30:51.735582   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.735582   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.735582   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.737927   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:30:51.737927   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.737927   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.737927   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.737927   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.737927   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.737927   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.738734   15272 round_trippers.go:580]     Audit-Id: 31d72e7a-e6aa-484f-9207-8a45f9fdbf95
	I0109 00:30:51.738961   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0109 00:30:51.739357   15272 pod_ready.go:92] pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace has status "Ready":"True"
	I0109 00:30:51.739357   15272 pod_ready.go:81] duration metric: took 8.1122ms waiting for pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:51.739516   15272 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:51.739605   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-173500
	I0109 00:30:51.739605   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.739643   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.739643   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.742872   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:30:51.742969   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.742969   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.742969   15272 round_trippers.go:580]     Audit-Id: 7d35fcc8-6651-4d3b-9f75-d3a0bb02de12
	I0109 00:30:51.742969   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.742969   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.742969   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.742969   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.743166   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-173500","namespace":"kube-system","uid":"43da51b9-2249-4c4d-a9c0-4c899270d870","resourceVersion":"1777","creationTimestamp":"2024-01-09T00:28:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.24.109.120:2379","kubernetes.io/config.hash":"d21425b7f4d2774c35dc812132e81582","kubernetes.io/config.mirror":"d21425b7f4d2774c35dc812132e81582","kubernetes.io/config.seen":"2024-01-09T00:28:04.947418401Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5873 chars]
	I0109 00:30:51.743724   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:30:51.743904   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.743904   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.743904   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.751286   15272 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0109 00:30:51.751286   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.751286   15272 round_trippers.go:580]     Audit-Id: eda42690-d82c-47b4-8148-1329a8c860b0
	I0109 00:30:51.751286   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.751286   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.751286   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.751286   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.751286   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.752317   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0109 00:30:51.752540   15272 pod_ready.go:92] pod "etcd-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:30:51.752540   15272 pod_ready.go:81] duration metric: took 13.0238ms waiting for pod "etcd-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:51.752540   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:51.752540   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-173500
	I0109 00:30:51.752540   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.752540   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.752540   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.760898   15272 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0109 00:30:51.760898   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.760898   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.760898   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.760898   15272 round_trippers.go:580]     Audit-Id: 29a90170-9e8b-406b-99cd-1a5603529e56
	I0109 00:30:51.760898   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.760898   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.760898   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.761519   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-173500","namespace":"kube-system","uid":"5c089ac2-fe84-48d8-9727-a932903b646d","resourceVersion":"1830","creationTimestamp":"2024-01-09T00:28:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.24.109.120:8443","kubernetes.io/config.hash":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.mirror":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.seen":"2024-01-09T00:28:04.947424101Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7409 chars]
	I0109 00:30:51.762115   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:30:51.762115   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.762115   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.762115   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.764747   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:30:51.765820   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.765820   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.765820   15272 round_trippers.go:580]     Audit-Id: e56a66c3-0b58-4b15-88ed-bde1d1234c31
	I0109 00:30:51.765820   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.765820   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.765820   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.765820   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.765920   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0109 00:30:51.766444   15272 pod_ready.go:92] pod "kube-apiserver-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:30:51.766444   15272 pod_ready.go:81] duration metric: took 13.9043ms waiting for pod "kube-apiserver-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:51.766444   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:51.766558   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-173500
	I0109 00:30:51.766558   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.766558   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.766558   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.769812   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:30:51.769812   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.769812   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.769812   15272 round_trippers.go:580]     Audit-Id: e9d5ccd9-8179-44fb-8b47-2667962a86f2
	I0109 00:30:51.769812   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.769812   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.769812   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.769812   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.771140   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-173500","namespace":"kube-system","uid":"a0252ea5-5d6a-4303-b7e6-151481d4cd8a","resourceVersion":"1796","creationTimestamp":"2024-01-09T00:05:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f6b180d5a2686dc98b0355b6df7f53ea","kubernetes.io/config.mirror":"f6b180d5a2686dc98b0355b6df7f53ea","kubernetes.io/config.seen":"2024-01-09T00:05:31.606504770Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7179 chars]
	I0109 00:30:51.771727   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:30:51.771727   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.771809   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.771809   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.774163   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:30:51.774163   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.774163   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.775050   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.775050   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.775050   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.775050   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.775050   15272 round_trippers.go:580]     Audit-Id: 2c5a8af0-b5cd-4833-a4d2-3e786999b33d
	I0109 00:30:51.775252   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0109 00:30:51.775664   15272 pod_ready.go:92] pod "kube-controller-manager-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:30:51.775730   15272 pod_ready.go:81] duration metric: took 9.2862ms waiting for pod "kube-controller-manager-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:51.775784   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4h4sv" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:51.916412   15272 request.go:629] Waited for 140.3167ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4h4sv
	I0109 00:30:51.916544   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4h4sv
	I0109 00:30:51.916544   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.916579   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.916770   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.921293   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:30:51.921293   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.921293   15272 round_trippers.go:580]     Audit-Id: 65481881-117f-4e71-923a-65423b6ea1c9
	I0109 00:30:51.921293   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.921293   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.921293   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.921293   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.921902   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.922030   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4h4sv","generateName":"kube-proxy-","namespace":"kube-system","uid":"a45861ba-73e0-452f-a535-f66e154ea1c6","resourceVersion":"2014","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec2434b9-012c-4df1-b401-04556fed7700","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec2434b9-012c-4df1-b401-04556fed7700\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5751 chars]
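
The "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted by client-go's request machinery when its local token-bucket rate limiter delays a call; the server-side APF headers (X-Kubernetes-Pf-*) in the responses above are unrelated to that wait. A hypothetical fragment showing where that limiter is tuned on a rest.Config follows; the QPS/Burst numbers are only the usual client-go defaults, not necessarily what minikube configures:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // client-go throttles on the client side using these two fields; when the
        // token bucket is empty, requests sleep and log the "client-side throttling" message.
        cfg.QPS = 5    // assumed default: ~5 requests/second steady state
        cfg.Burst = 10 // assumed default: bursts of up to 10 requests
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
        fmt.Println("clientset built with explicit client-side rate limits")
    }
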
	I0109 00:30:52.117252   15272 request.go:629] Waited for 193.9898ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:52.117356   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:52.117356   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:52.117356   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:52.117356   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:52.120315   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:30:52.120315   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:52.120315   15272 round_trippers.go:580]     Audit-Id: 66f04b2e-e6f5-4823-aa91-db70dab8408c
	I0109 00:30:52.120315   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:52.120315   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:52.120315   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:52.120315   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:52.121320   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:52 GMT
	I0109 00:30:52.121556   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2022","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3926 chars]
	I0109 00:30:52.122484   15272 pod_ready.go:92] pod "kube-proxy-4h4sv" in "kube-system" namespace has status "Ready":"True"
	I0109 00:30:52.122557   15272 pod_ready.go:81] duration metric: took 346.6996ms waiting for pod "kube-proxy-4h4sv" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:52.122557   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mj6ks" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:52.319873   15272 request.go:629] Waited for 197.1837ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mj6ks
	I0109 00:30:52.319873   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mj6ks
	I0109 00:30:52.319873   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:52.319873   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:52.319873   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:52.324587   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:30:52.324587   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:52.324587   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:52.325286   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:52.325286   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:52 GMT
	I0109 00:30:52.325286   15272 round_trippers.go:580]     Audit-Id: 63eb578d-a3c3-4218-9ec8-44ee471b9f6c
	I0109 00:30:52.325286   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:52.325286   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:52.325518   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mj6ks","generateName":"kube-proxy-","namespace":"kube-system","uid":"bd23c4c8-d363-4a3f-b750-a3de2346a3bb","resourceVersion":"1866","creationTimestamp":"2024-01-09T00:13:24Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec2434b9-012c-4df1-b401-04556fed7700","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:13:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec2434b9-012c-4df1-b401-04556fed7700\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5972 chars]
	I0109 00:30:52.524732   15272 request.go:629] Waited for 198.452ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m03
	I0109 00:30:52.524874   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m03
	I0109 00:30:52.524953   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:52.525035   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:52.525035   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:52.528477   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:30:52.528477   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:52.528477   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:52 GMT
	I0109 00:30:52.528477   15272 round_trippers.go:580]     Audit-Id: ab14c57f-4c4d-4b62-bb59-37673876fe51
	I0109 00:30:52.528477   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:52.528477   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:52.528477   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:52.528908   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:52.529071   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m03","uid":"9d8a783a-d01b-498d-94ae-1e3f65e7667c","resourceVersion":"2000","creationTimestamp":"2024-01-09T00:23:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:23:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 4392 chars]
	I0109 00:30:52.529623   15272 pod_ready.go:97] node "multinode-173500-m03" hosting pod "kube-proxy-mj6ks" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500-m03" has status "Ready":"Unknown"
	I0109 00:30:52.529623   15272 pod_ready.go:81] duration metric: took 407.0665ms waiting for pod "kube-proxy-mj6ks" in "kube-system" namespace to be "Ready" ...
	E0109 00:30:52.529623   15272 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-173500-m03" hosting pod "kube-proxy-mj6ks" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500-m03" has status "Ready":"Unknown"
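
For kube-proxy-mj6ks the wait is abandoned rather than failed: after fetching the pod, the loop looks up the node the pod is scheduled on, sees that multinode-173500-m03 reports Ready as Unknown, and skips the pod instead of blocking on it. A minimal, illustrative decision helper capturing that rule (hypothetical names, not the actual minikube function) might be:

    package podwait

    import corev1 "k8s.io/api/core/v1"

    // shouldSkipPod reports whether readiness waiting for the pod should be skipped
    // because the node it is scheduled on is not itself Ready (status False or Unknown).
    func shouldSkipPod(pod *corev1.Pod, node *corev1.Node) bool {
        if pod.Spec.NodeName != node.Name {
            return false // only skip based on the pod's own hosting node
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status != corev1.ConditionTrue
            }
        }
        return true // no Ready condition reported at all: treat as not Ready
    }
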
	I0109 00:30:52.529623   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qrtm6" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:52.725164   15272 request.go:629] Waited for 195.1625ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qrtm6
	I0109 00:30:52.725401   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qrtm6
	I0109 00:30:52.725401   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:52.725401   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:52.725401   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:52.732957   15272 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0109 00:30:52.732957   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:52.732957   15272 round_trippers.go:580]     Audit-Id: b163b680-e840-441c-8223-012bf75695a1
	I0109 00:30:52.732957   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:52.732957   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:52.732957   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:52.732957   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:52.732957   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:52 GMT
	I0109 00:30:52.732957   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qrtm6","generateName":"kube-proxy-","namespace":"kube-system","uid":"37d066e0-6ff3-4f22-abc3-6bddfa64736e","resourceVersion":"1833","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec2434b9-012c-4df1-b401-04556fed7700","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec2434b9-012c-4df1-b401-04556fed7700\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0109 00:30:52.927189   15272 request.go:629] Waited for 192.9915ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:30:52.927189   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:30:52.927189   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:52.927189   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:52.927494   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:52.932000   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:30:52.932000   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:52.932156   15272 round_trippers.go:580]     Audit-Id: c143bf4f-38be-4bab-bd32-8c884580310c
	I0109 00:30:52.932156   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:52.932156   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:52.932156   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:52.932156   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:52.932156   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:52 GMT
	I0109 00:30:52.932755   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0109 00:30:52.933153   15272 pod_ready.go:92] pod "kube-proxy-qrtm6" in "kube-system" namespace has status "Ready":"True"
	I0109 00:30:52.933153   15272 pod_ready.go:81] duration metric: took 403.5293ms waiting for pod "kube-proxy-qrtm6" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:52.933153   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:53.115569   15272 request.go:629] Waited for 182.4164ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-173500
	I0109 00:30:53.115569   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-173500
	I0109 00:30:53.115569   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:53.115569   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:53.115569   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:53.120177   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:30:53.120177   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:53.120310   15272 round_trippers.go:580]     Audit-Id: 2fadb41d-6486-480b-8884-e72c8e95c955
	I0109 00:30:53.120310   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:53.120310   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:53.120310   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:53.120310   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:53.120310   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:53 GMT
	I0109 00:30:53.120783   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-173500","namespace":"kube-system","uid":"31d8cdf6-292f-4b3c-87c5-951fc34d0ea4","resourceVersion":"1829","creationTimestamp":"2024-01-09T00:05:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70306498a200a6bbe0aa0b41e240d63b","kubernetes.io/config.mirror":"70306498a200a6bbe0aa0b41e240d63b","kubernetes.io/config.seen":"2024-01-09T00:05:21.481168866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4909 chars]
	I0109 00:30:53.317578   15272 request.go:629] Waited for 196.0716ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:30:53.317773   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:30:53.317866   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:53.317904   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:53.317937   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:53.322787   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:30:53.322787   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:53.322787   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:53.322787   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:53.322908   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:53 GMT
	I0109 00:30:53.322908   15272 round_trippers.go:580]     Audit-Id: c1b41569-3337-4ef8-8a7f-d229495216a2
	I0109 00:30:53.322908   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:53.322908   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:53.323456   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0109 00:30:53.324140   15272 pod_ready.go:92] pod "kube-scheduler-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:30:53.324218   15272 pod_ready.go:81] duration metric: took 391.0653ms waiting for pod "kube-scheduler-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:53.324321   15272 pod_ready.go:38] duration metric: took 1.6063485s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
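
The 1.6s "extra waiting" pass summarized above iterates over the label selectors listed (k8s-app=kube-dns, component=etcd, and so on) and requires each matching pod in kube-system to report its Ready condition as True. A sketch of one such label query with client-go is below; the namespace and selectors come from the log, while the package and helper names are invented for illustration:

    package podwait

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podsReadyForSelector lists kube-system pods matching the selector and reports
    // whether every one of them has a Ready condition with status True.
    func podsReadyForSelector(ctx context.Context, cs kubernetes.Interface, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return false, err
        }
        for _, pod := range pods.Items {
            ready := false
            for _, cond := range pod.Status.Conditions {
                if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                    ready = true
                    break
                }
            }
            if !ready {
                return false, nil
            }
        }
        return true, nil
    }
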
	I0109 00:30:53.324321   15272 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:30:53.341626   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:30:53.363420   15272 system_svc.go:56] duration metric: took 39.0997ms WaitForService to wait for kubelet.
	I0109 00:30:53.363420   15272 kubeadm.go:581] duration metric: took 8.6957761s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:30:53.363420   15272 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:30:53.519986   15272 request.go:629] Waited for 156.3343ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes
	I0109 00:30:53.520093   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes
	I0109 00:30:53.520093   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:53.520093   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:53.520093   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:53.525572   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:30:53.525572   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:53.525572   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:53.525572   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:53.526154   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:53.526154   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:53 GMT
	I0109 00:30:53.526154   15272 round_trippers.go:580]     Audit-Id: 8ab7c3ed-4243-4153-a648-b0d1899e17c9
	I0109 00:30:53.526154   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:53.526732   15272 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2027"},"items":[{"metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15594 chars]
	I0109 00:30:53.527632   15272 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:30:53.527757   15272 node_conditions.go:123] node cpu capacity is 2
	I0109 00:30:53.527757   15272 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:30:53.527757   15272 node_conditions.go:123] node cpu capacity is 2
	I0109 00:30:53.527757   15272 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:30:53.527757   15272 node_conditions.go:123] node cpu capacity is 2
	I0109 00:30:53.527757   15272 node_conditions.go:105] duration metric: took 164.3365ms to run NodePressure ...
	I0109 00:30:53.527757   15272 start.go:228] waiting for startup goroutines ...
	I0109 00:30:53.527867   15272 start.go:242] writing updated cluster config ...
	I0109 00:30:53.547608   15272 config.go:182] Loaded profile config "multinode-173500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0109 00:30:53.547912   15272 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\config.json ...
	I0109 00:30:53.557455   15272 out.go:177] * Starting worker node multinode-173500-m03 in cluster multinode-173500
	I0109 00:30:53.560292   15272 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0109 00:30:53.560292   15272 cache.go:56] Caching tarball of preloaded images
	I0109 00:30:53.560292   15272 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0109 00:30:53.560292   15272 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0109 00:30:53.560292   15272 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\config.json ...
	I0109 00:30:53.564614   15272 start.go:365] acquiring machines lock for multinode-173500-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0109 00:30:53.564614   15272 start.go:369] acquired machines lock for "multinode-173500-m03" in 0s
	I0109 00:30:53.564614   15272 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:30:53.564992   15272 fix.go:54] fixHost starting: m03
	I0109 00:30:53.565224   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:30:55.701776   15272 main.go:141] libmachine: [stdout =====>] : Off
	
	I0109 00:30:55.701776   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:55.702043   15272 fix.go:102] recreateIfNeeded on multinode-173500-m03: state=Stopped err=<nil>
	W0109 00:30:55.702043   15272 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:30:55.705287   15272 out.go:177] * Restarting existing hyperv VM for "multinode-173500-m03" ...
	I0109 00:30:55.709308   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-173500-m03
	I0109 00:30:58.216780   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:30:58.216848   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:58.216848   15272 main.go:141] libmachine: Waiting for host to start...
	I0109 00:30:58.216848   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:00.489934   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:00.490186   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:00.490186   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:31:03.098159   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:31:03.098317   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:04.101103   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:06.314469   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:06.314469   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:06.314558   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:31:08.915067   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:31:08.915137   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:09.930650   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:12.191605   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:12.191689   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:12.191749   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:31:14.809701   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:31:14.809701   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:15.814385   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:18.043625   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:18.043667   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:18.043753   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:31:20.628015   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:31:20.628015   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:21.631669   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:23.877007   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:23.877054   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:23.877097   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:31:26.521241   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:31:26.521500   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:26.524461   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:28.652900   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:28.652900   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:28.653276   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:31:31.289854   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:31:31.289854   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:31.290298   15272 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\config.json ...
	I0109 00:31:31.294000   15272 machine.go:88] provisioning docker machine ...
	I0109 00:31:31.294107   15272 buildroot.go:166] provisioning hostname "multinode-173500-m03"
	I0109 00:31:31.294214   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:33.439160   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:33.439160   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:33.439160   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:31:36.005454   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:31:36.005454   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:36.011371   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:31:36.012156   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.101.30 22 <nil> <nil>}
	I0109 00:31:36.012156   15272 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-173500-m03 && echo "multinode-173500-m03" | sudo tee /etc/hostname
	I0109 00:31:36.177233   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-173500-m03
	
	I0109 00:31:36.177233   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:38.314827   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:38.315126   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:38.315220   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:31:40.876972   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:31:40.877222   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:40.883078   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:31:40.883902   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.101.30 22 <nil> <nil>}
	I0109 00:31:40.883902   15272 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-173500-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-173500-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-173500-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:31:41.039738   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:31:41.039909   15272 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0109 00:31:41.039909   15272 buildroot.go:174] setting up certificates
	I0109 00:31:41.040018   15272 provision.go:83] configureAuth start
	I0109 00:31:41.040193   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:43.189630   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:43.189630   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:43.189630   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:31:45.751399   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:31:45.751399   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:45.751599   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:47.908281   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:47.908281   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:47.908281   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:31:50.461607   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:31:50.461659   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:50.461749   15272 provision.go:138] copyHostCerts
	I0109 00:31:50.461921   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0109 00:31:50.461921   15272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0109 00:31:50.461921   15272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0109 00:31:50.462988   15272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0109 00:31:50.464088   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0109 00:31:50.464118   15272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0109 00:31:50.464118   15272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0109 00:31:50.464663   15272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0109 00:31:50.465787   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0109 00:31:50.465860   15272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0109 00:31:50.465860   15272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0109 00:31:50.466554   15272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0109 00:31:50.467996   15272 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-173500-m03 san=[172.24.101.30 172.24.101.30 localhost 127.0.0.1 minikube multinode-173500-m03]
	I0109 00:31:50.542922   15272 provision.go:172] copyRemoteCerts
	I0109 00:31:50.557309   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:31:50.557309   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:52.720007   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:52.720400   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:52.720400   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:31:55.281807   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:31:55.282159   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:55.282388   15272 sshutil.go:53] new ssh client: &{IP:172.24.101.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m03\id_rsa Username:docker}
	I0109 00:31:55.390469   15272 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8331598s)
	I0109 00:31:55.391483   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0109 00:31:55.391923   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0109 00:31:55.434454   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0109 00:31:55.434968   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0109 00:31:55.477684   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0109 00:31:55.477684   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0109 00:31:55.524030   15272 provision.go:86] duration metric: configureAuth took 14.4840105s
	I0109 00:31:55.524208   15272 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:31:55.524912   15272 config.go:182] Loaded profile config "multinode-173500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0109 00:31:55.524978   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:57.704612   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:57.704612   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:57.704698   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:32:00.362858   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:32:00.362858   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:32:00.369522   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:32:00.370265   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.101.30 22 <nil> <nil>}
	I0109 00:32:00.370265   15272 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0109 00:32:00.511800   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0109 00:32:00.511886   15272 buildroot.go:70] root file system type: tmpfs
	I0109 00:32:00.511959   15272 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0109 00:32:00.511959   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:32:02.666728   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:32:02.666836   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:32:02.666836   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:32:05.275733   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:32:05.275942   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:32:05.281484   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:32:05.282294   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.101.30 22 <nil> <nil>}
	I0109 00:32:05.282357   15272 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.24.109.120"
	Environment="NO_PROXY=172.24.109.120,172.24.111.157"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0109 00:32:05.447325   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.24.109.120
	Environment=NO_PROXY=172.24.109.120,172.24.111.157
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0109 00:32:05.447325   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:32:07.639997   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:32:07.640450   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:32:07.640563   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:32:10.221433   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:32:10.221433   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:32:10.230300   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:32:10.231076   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.101.30 22 <nil> <nil>}
	I0109 00:32:10.231076   15272 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0109 00:32:11.481518   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0109 00:32:11.481518   15272 machine.go:91] provisioned docker machine in 40.1874075s
	I0109 00:32:11.481518   15272 start.go:300] post-start starting for "multinode-173500-m03" (driver="hyperv")
	I0109 00:32:11.481518   15272 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:32:11.497813   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:32:11.497813   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:32:13.651655   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:32:13.651765   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:32:13.651765   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:32:16.230597   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:32:16.230640   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:32:16.231056   15272 sshutil.go:53] new ssh client: &{IP:172.24.101.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m03\id_rsa Username:docker}
	I0109 00:32:16.343128   15272 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.845314s)
	I0109 00:32:16.357813   15272 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:32:16.364800   15272 command_runner.go:130] > NAME=Buildroot
	I0109 00:32:16.364800   15272 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0109 00:32:16.364800   15272 command_runner.go:130] > ID=buildroot
	I0109 00:32:16.364800   15272 command_runner.go:130] > VERSION_ID=2021.02.12
	I0109 00:32:16.364800   15272 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0109 00:32:16.364800   15272 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:32:16.364800   15272 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0109 00:32:16.365524   15272 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0109 00:32:16.366717   15272 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0109 00:32:16.366717   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0109 00:32:16.380396   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:32:16.396227   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0109 00:32:16.438863   15272 start.go:303] post-start completed in 4.9573438s
	I0109 00:32:16.438863   15272 fix.go:56] fixHost completed within 1m22.8738626s
	I0109 00:32:16.438863   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:32:18.654898   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:32:18.654983   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:32:18.654983   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:32:21.325147   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:32:21.325147   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:32:21.332375   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:32:21.333050   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.101.30 22 <nil> <nil>}
	I0109 00:32:21.333050   15272 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0109 00:32:21.472157   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704760341.471060542
	
	I0109 00:32:21.472281   15272 fix.go:206] guest clock: 1704760341.471060542
	I0109 00:32:21.472281   15272 fix.go:219] Guest: 2024-01-09 00:32:21.471060542 +0000 UTC Remote: 2024-01-09 00:32:16.4388631 +0000 UTC m=+370.823510001 (delta=5.032197442s)
	I0109 00:32:21.472281   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:32:23.679675   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:32:23.679887   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:32:23.679887   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:32:26.319873   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:32:26.319873   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:32:26.326454   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:32:26.327253   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.101.30 22 <nil> <nil>}
	I0109 00:32:26.327253   15272 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1704760341

                                                
                                                
** /stderr **
multinode_test.go:325: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-173500" : exit status 1
multinode_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-173500
multinode_test.go:328: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-173500: context deadline exceeded (0s)
multinode_test.go:330: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-173500" : context deadline exceeded
multinode_test.go:335: reported node list is not the same after restart. Before restart: multinode-173500	172.24.100.178
multinode-173500-m02	172.24.108.84
multinode-173500-m03	172.24.100.87

                                                
                                                
After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-173500 -n multinode-173500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-173500 -n multinode-173500: (12.4347946s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 logs -n 25: (9.1025716s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                          Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-173500 cp testdata\cp-test.txt                                                                                | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:16 UTC | 09 Jan 24 00:17 UTC |
	|         | multinode-173500-m02:/home/docker/cp-test.txt                                                                           |                  |                   |         |                     |                     |
	| ssh     | multinode-173500 ssh -n                                                                                                 | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:17 UTC | 09 Jan 24 00:17 UTC |
	|         | multinode-173500-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-173500 cp multinode-173500-m02:/home/docker/cp-test.txt                                                       | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:17 UTC | 09 Jan 24 00:17 UTC |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile273249123\001\cp-test_multinode-173500-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-173500 ssh -n                                                                                                 | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:17 UTC | 09 Jan 24 00:17 UTC |
	|         | multinode-173500-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-173500 cp multinode-173500-m02:/home/docker/cp-test.txt                                                       | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:17 UTC | 09 Jan 24 00:17 UTC |
	|         | multinode-173500:/home/docker/cp-test_multinode-173500-m02_multinode-173500.txt                                         |                  |                   |         |                     |                     |
	| ssh     | multinode-173500 ssh -n                                                                                                 | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:17 UTC | 09 Jan 24 00:18 UTC |
	|         | multinode-173500-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-173500 ssh -n multinode-173500 sudo cat                                                                       | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:18 UTC | 09 Jan 24 00:18 UTC |
	|         | /home/docker/cp-test_multinode-173500-m02_multinode-173500.txt                                                          |                  |                   |         |                     |                     |
	| cp      | multinode-173500 cp multinode-173500-m02:/home/docker/cp-test.txt                                                       | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:18 UTC | 09 Jan 24 00:18 UTC |
	|         | multinode-173500-m03:/home/docker/cp-test_multinode-173500-m02_multinode-173500-m03.txt                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-173500 ssh -n                                                                                                 | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:18 UTC | 09 Jan 24 00:18 UTC |
	|         | multinode-173500-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-173500 ssh -n multinode-173500-m03 sudo cat                                                                   | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:18 UTC | 09 Jan 24 00:18 UTC |
	|         | /home/docker/cp-test_multinode-173500-m02_multinode-173500-m03.txt                                                      |                  |                   |         |                     |                     |
	| cp      | multinode-173500 cp testdata\cp-test.txt                                                                                | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:18 UTC | 09 Jan 24 00:18 UTC |
	|         | multinode-173500-m03:/home/docker/cp-test.txt                                                                           |                  |                   |         |                     |                     |
	| ssh     | multinode-173500 ssh -n                                                                                                 | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:18 UTC | 09 Jan 24 00:19 UTC |
	|         | multinode-173500-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-173500 cp multinode-173500-m03:/home/docker/cp-test.txt                                                       | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:19 UTC | 09 Jan 24 00:19 UTC |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile273249123\001\cp-test_multinode-173500-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-173500 ssh -n                                                                                                 | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:19 UTC | 09 Jan 24 00:19 UTC |
	|         | multinode-173500-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-173500 cp multinode-173500-m03:/home/docker/cp-test.txt                                                       | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:19 UTC | 09 Jan 24 00:19 UTC |
	|         | multinode-173500:/home/docker/cp-test_multinode-173500-m03_multinode-173500.txt                                         |                  |                   |         |                     |                     |
	| ssh     | multinode-173500 ssh -n                                                                                                 | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:19 UTC | 09 Jan 24 00:19 UTC |
	|         | multinode-173500-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-173500 ssh -n multinode-173500 sudo cat                                                                       | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:19 UTC | 09 Jan 24 00:20 UTC |
	|         | /home/docker/cp-test_multinode-173500-m03_multinode-173500.txt                                                          |                  |                   |         |                     |                     |
	| cp      | multinode-173500 cp multinode-173500-m03:/home/docker/cp-test.txt                                                       | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:20 UTC | 09 Jan 24 00:20 UTC |
	|         | multinode-173500-m02:/home/docker/cp-test_multinode-173500-m03_multinode-173500-m02.txt                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-173500 ssh -n                                                                                                 | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:20 UTC | 09 Jan 24 00:20 UTC |
	|         | multinode-173500-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-173500 ssh -n multinode-173500-m02 sudo cat                                                                   | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:20 UTC | 09 Jan 24 00:20 UTC |
	|         | /home/docker/cp-test_multinode-173500-m03_multinode-173500-m02.txt                                                      |                  |                   |         |                     |                     |
	| node    | multinode-173500 node stop m03                                                                                          | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:20 UTC | 09 Jan 24 00:20 UTC |
	| node    | multinode-173500 node start                                                                                             | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:21 UTC | 09 Jan 24 00:24 UTC |
	|         | m03 --alsologtostderr                                                                                                   |                  |                   |         |                     |                     |
	| node    | list -p multinode-173500                                                                                                | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:24 UTC |                     |
	| stop    | -p multinode-173500                                                                                                     | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:24 UTC | 09 Jan 24 00:26 UTC |
	| start   | -p multinode-173500                                                                                                     | multinode-173500 | minikube1\jenkins | v1.32.0 | 09 Jan 24 00:26 UTC |                     |
	|         | --wait=true -v=8                                                                                                        |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                       |                  |                   |         |                     |                     |
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/09 00:26:05
	Running on machine: minikube1
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0109 00:26:05.796557   15272 out.go:296] Setting OutFile to fd 928 ...
	I0109 00:26:05.797412   15272 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:26:05.797412   15272 out.go:309] Setting ErrFile to fd 660...
	I0109 00:26:05.797412   15272 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:26:05.821878   15272 out.go:303] Setting JSON to false
	I0109 00:26:05.824870   15272 start.go:128] hostinfo: {"hostname":"minikube1","uptime":7460,"bootTime":1704752505,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0109 00:26:05.824870   15272 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0109 00:26:05.828936   15272 out.go:177] * [multinode-173500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I0109 00:26:05.832758   15272 notify.go:220] Checking for updates...
	I0109 00:26:05.837297   15272 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:26:05.841744   15272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0109 00:26:05.844776   15272 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0109 00:26:05.847770   15272 out.go:177]   - MINIKUBE_LOCATION=17830
	I0109 00:26:05.850770   15272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:26:05.853709   15272 config.go:182] Loaded profile config "multinode-173500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0109 00:26:05.853709   15272 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:26:11.301939   15272 out.go:177] * Using the hyperv driver based on existing profile
	I0109 00:26:11.305482   15272 start.go:298] selected driver: hyperv
	I0109 00:26:11.305482   15272 start.go:902] validating driver "hyperv" against &{Name:multinode-173500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-173500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.24.100.178 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.24.108.84 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.24.100.87 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false ina
ccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:26:11.305762   15272 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0109 00:26:11.359424   15272 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0109 00:26:11.359944   15272 cni.go:84] Creating CNI manager for ""
	I0109 00:26:11.359944   15272 cni.go:136] 3 nodes found, recommending kindnet
	I0109 00:26:11.359944   15272 start_flags.go:323] config:
	{Name:multinode-173500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-173500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.24.100.178 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.24.108.84 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.24.100.87 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:26:11.360326   15272 iso.go:125] acquiring lock: {Name:mk6e44fd4f974e035b521383471f58bfbae3f4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:26:11.366202   15272 out.go:177] * Starting control plane node multinode-173500 in cluster multinode-173500
	I0109 00:26:11.368739   15272 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0109 00:26:11.368739   15272 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0109 00:26:11.368739   15272 cache.go:56] Caching tarball of preloaded images
	I0109 00:26:11.369500   15272 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0109 00:26:11.369500   15272 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0109 00:26:11.369500   15272 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\config.json ...
	I0109 00:26:11.372555   15272 start.go:365] acquiring machines lock for multinode-173500: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0109 00:26:11.372555   15272 start.go:369] acquired machines lock for "multinode-173500" in 0s
	I0109 00:26:11.373207   15272 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:26:11.373358   15272 fix.go:54] fixHost starting: 
	I0109 00:26:11.373525   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:26:14.140760   15272 main.go:141] libmachine: [stdout =====>] : Off
	
	I0109 00:26:14.140760   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:14.140856   15272 fix.go:102] recreateIfNeeded on multinode-173500: state=Stopped err=<nil>
	W0109 00:26:14.140856   15272 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:26:14.148515   15272 out.go:177] * Restarting existing hyperv VM for "multinode-173500" ...
	I0109 00:26:14.151421   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-173500
	I0109 00:26:17.292767   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:26:17.293006   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:17.293006   15272 main.go:141] libmachine: Waiting for host to start...
	I0109 00:26:17.293173   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:26:19.597242   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:26:19.597242   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:19.597334   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:26:22.196185   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:26:22.196185   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:23.199462   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:26:25.515082   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:26:25.515386   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:25.515386   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:26:28.168608   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:26:28.169013   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:29.172345   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:26:31.475773   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:26:31.475955   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:31.476014   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:26:34.092026   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:26:34.096270   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:35.111630   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:26:37.334976   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:26:37.334976   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:37.335089   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:26:39.871643   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:26:39.871643   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:40.873116   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:26:43.106065   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:26:43.106065   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:43.106065   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:26:45.685055   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:26:45.685272   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:45.688344   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:26:47.847455   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:26:47.847455   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:47.847584   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:26:50.439066   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:26:50.439066   15272 main.go:141] libmachine: [stderr =====>] : 
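
The block above is the driver's readiness loop: it keeps asking Hyper-V for the VM state and then for the first adapter address, pausing between attempts, until an IP comes back (172.24.109.120 here). A minimal Go sketch of that pattern, purely illustrative and not minikube's actual code (the helper name ps and the one-second retry interval are assumptions), looks like this:

// pollhv.go — illustrative sketch of the state/IP polling loop seen in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// ps runs one PowerShell expression the same way the log shows
// (-NoProfile -NonInteractive) and returns its trimmed stdout.
func ps(expr string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const vm = "multinode-173500" // VM name taken from the log above
	for {
		state, err := ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
		if err != nil || state != "Running" {
			time.Sleep(time.Second)
			continue
		}
		// The adapter can report an empty address for a while after the VM
		// reaches Running; keep retrying until something comes back.
		ip, _ := ps(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
		if ip != "" {
			fmt.Println("host is up at", ip)
			return
		}
		time.Sleep(time.Second)
	}
}

The same two PowerShell expressions recur before every later SSH step in this log, because the driver re-resolves the VM's address each time it builds an SSH client.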
	I0109 00:26:50.439066   15272 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\config.json ...
	I0109 00:26:50.443235   15272 machine.go:88] provisioning docker machine ...
	I0109 00:26:50.443393   15272 buildroot.go:166] provisioning hostname "multinode-173500"
	I0109 00:26:50.443568   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:26:52.602210   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:26:52.602269   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:52.602269   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:26:55.183643   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:26:55.183643   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:55.187887   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:26:55.190570   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.109.120 22 <nil> <nil>}
	I0109 00:26:55.190570   15272 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-173500 && echo "multinode-173500" | sudo tee /etc/hostname
	I0109 00:26:55.353683   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-173500
	
	I0109 00:26:55.353683   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:26:57.561376   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:26:57.561605   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:26:57.561818   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:00.210510   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:00.210510   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:00.216618   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:27:00.217390   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.109.120 22 <nil> <nil>}
	I0109 00:27:00.217390   15272 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-173500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-173500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-173500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:27:00.383176   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:27:00.383176   15272 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0109 00:27:00.383713   15272 buildroot.go:174] setting up certificates
	I0109 00:27:00.383790   15272 provision.go:83] configureAuth start
	I0109 00:27:00.383926   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:02.531185   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:02.531265   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:02.531265   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:05.108789   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:05.108789   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:05.108789   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:07.260927   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:07.261129   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:07.261129   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:09.821413   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:09.821668   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:09.821668   15272 provision.go:138] copyHostCerts
	I0109 00:27:09.821940   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0109 00:27:09.822260   15272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0109 00:27:09.822260   15272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0109 00:27:09.822778   15272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0109 00:27:09.824073   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0109 00:27:09.824073   15272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0109 00:27:09.824073   15272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0109 00:27:09.824073   15272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0109 00:27:09.826298   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0109 00:27:09.826877   15272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0109 00:27:09.826877   15272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0109 00:27:09.827263   15272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0109 00:27:09.828385   15272 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-173500 san=[172.24.109.120 172.24.109.120 localhost 127.0.0.1 minikube multinode-173500]
	I0109 00:27:10.251479   15272 provision.go:172] copyRemoteCerts
	I0109 00:27:10.264450   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:27:10.264450   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:12.422068   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:12.422068   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:12.422068   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:14.983322   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:14.983322   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:14.983631   15272 sshutil.go:53] new ssh client: &{IP:172.24.109.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\id_rsa Username:docker}
	I0109 00:27:15.094491   15272 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8300406s)
	I0109 00:27:15.094491   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0109 00:27:15.095120   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0109 00:27:15.137900   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0109 00:27:15.137900   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0109 00:27:15.184708   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0109 00:27:15.185298   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0109 00:27:15.224204   15272 provision.go:86] duration metric: configureAuth took 14.8404119s
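
configureAuth above copies the host-side CA material onto the guest and mints a server certificate whose SANs cover the VM IP, localhost, and the machine names (see the san=[...] list a few lines up). A compact, self-contained sketch of that kind of issuance with Go's crypto/x509 follows; it is not minikube's implementation, and the CA here is generated on the spot rather than loaded from ca.pem/ca-key.pem:

// servercert.go — illustrative sketch of issuing a server cert with SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical CA, created inline purely for the example.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SAN list from the log (IPs plus hostnames).
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-173500"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-173500"},
		IPAddresses:  []net.IP{net.ParseIP("172.24.109.120"), net.ParseIP("127.0.0.1")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}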
	I0109 00:27:15.224204   15272 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:27:15.224759   15272 config.go:182] Loaded profile config "multinode-173500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0109 00:27:15.224974   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:17.382512   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:17.382765   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:17.382765   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:19.979022   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:19.979022   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:19.988045   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:27:19.988757   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.109.120 22 <nil> <nil>}
	I0109 00:27:19.988757   15272 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0109 00:27:20.128576   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0109 00:27:20.128689   15272 buildroot.go:70] root file system type: tmpfs
	I0109 00:27:20.128929   15272 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0109 00:27:20.128929   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:22.266487   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:22.266558   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:22.266558   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:24.834101   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:24.834101   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:24.840227   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:27:24.840922   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.109.120 22 <nil> <nil>}
	I0109 00:27:24.840922   15272 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0109 00:27:25.002186   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0109 00:27:25.002403   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:27.158104   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:27.158300   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:27.158300   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:29.710265   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:29.710265   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:29.716276   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:27:29.717065   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.109.120 22 <nil> <nil>}
	I0109 00:27:29.717065   15272 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0109 00:27:31.113088   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0109 00:27:31.113369   15272 machine.go:91] provisioned docker machine in 40.6699723s
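
The docker.service update just above follows a write-then-swap idiom: the rendered unit is written to docker.service.new, compared against the existing file, and only on a difference moved into place and followed by daemon-reload, enable, and restart (that is the sudo diff -u ... || { ... } one-liner). A small Go sketch of the same idiom, assuming it runs as root on the guest rather than over SSH as minikube does:

// unitupdate.go — illustrative sketch of the "write .new, diff, swap on change" pattern.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const unit = "/lib/systemd/system/docker.service"
	newBody, err := os.ReadFile(unit + ".new")
	if err != nil {
		panic(err)
	}
	old, err := os.ReadFile(unit) // may not exist on first provision, as in this log
	if err == nil && bytes.Equal(old, newBody) {
		fmt.Println("unit unchanged; nothing to do")
		return
	}
	if err := os.Rename(unit+".new", unit); err != nil {
		panic(err)
	}
	// Reload systemd and restart docker so the new ExecStart takes effect.
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("%v: %s", err, out))
		}
	}
}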
	I0109 00:27:31.113369   15272 start.go:300] post-start starting for "multinode-173500" (driver="hyperv")
	I0109 00:27:31.113369   15272 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:27:31.129608   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:27:31.129608   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:33.280606   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:33.280606   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:33.280715   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:35.823605   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:35.823605   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:35.823605   15272 sshutil.go:53] new ssh client: &{IP:172.24.109.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\id_rsa Username:docker}
	I0109 00:27:35.934133   15272 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8045242s)
	I0109 00:27:35.947961   15272 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:27:35.955651   15272 command_runner.go:130] > NAME=Buildroot
	I0109 00:27:35.955842   15272 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0109 00:27:35.955842   15272 command_runner.go:130] > ID=buildroot
	I0109 00:27:35.955878   15272 command_runner.go:130] > VERSION_ID=2021.02.12
	I0109 00:27:35.955878   15272 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0109 00:27:35.955878   15272 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:27:35.955982   15272 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0109 00:27:35.956515   15272 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0109 00:27:35.957602   15272 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0109 00:27:35.957602   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0109 00:27:35.971825   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:27:35.990487   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0109 00:27:36.030118   15272 start.go:303] post-start completed in 4.9167482s
	I0109 00:27:36.030247   15272 fix.go:56] fixHost completed within 1m24.656818s
	I0109 00:27:36.030247   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:38.193733   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:38.193733   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:38.193733   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:40.759254   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:40.759254   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:40.765310   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:27:40.765984   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.109.120 22 <nil> <nil>}
	I0109 00:27:40.765984   15272 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0109 00:27:40.906315   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704760060.905080714
	
	I0109 00:27:40.906315   15272 fix.go:206] guest clock: 1704760060.905080714
	I0109 00:27:40.906315   15272 fix.go:219] Guest: 2024-01-09 00:27:40.905080714 +0000 UTC Remote: 2024-01-09 00:27:36.0302478 +0000 UTC m=+90.414922801 (delta=4.874832914s)
	I0109 00:27:40.906854   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:43.034084   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:43.034084   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:43.034207   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:45.557377   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:45.557461   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:45.565357   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:27:45.566284   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.109.120 22 <nil> <nil>}
	I0109 00:27:45.566284   15272 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1704760060
	I0109 00:27:45.714317   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jan  9 00:27:40 UTC 2024
	
	I0109 00:27:45.714317   15272 fix.go:226] clock set: Tue Jan  9 00:27:40 UTC 2024
	 (err=<nil>)
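
The clock fix above works by reading the guest clock (date +%s.%N), comparing it against the host-side timestamp recorded when fixHost finished, and issuing sudo date -s @<epoch> over SSH when the drift is noticeable (about 4.9 s in this run). A rough Go sketch of the check; the one-second threshold and the local date call are assumptions for illustration only:

// clocksync.go — illustrative sketch of the guest-clock drift check.
package main

import (
	"fmt"
	"math"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// guestClock reads `date +%s.%N`; run locally here for simplicity, whereas
// minikube runs the same command on the guest over SSH.
func guestClock() (time.Time, error) {
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		return time.Time{}, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	return time.Unix(sec, int64((secs-float64(sec))*1e9)), nil
}

func main() {
	host := time.Now() // host-side reference, analogous to "Remote:" in the log
	guest, err := guestClock()
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(host)
	fmt.Printf("guest=%s host=%s delta=%s\n", guest, host, delta)
	if math.Abs(delta.Seconds()) > 1 { // threshold is an assumption
		// The log above shows a `sudo date -s @<epoch>` being issued at this
		// point; here we only report that a fix would be needed.
		fmt.Println("drift exceeds threshold; guest clock would be reset")
	}
}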
	I0109 00:27:45.714317   15272 start.go:83] releasing machines lock for "multinode-173500", held for 1m34.3417528s
	I0109 00:27:45.714317   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:47.833066   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:47.833066   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:47.833385   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:50.342357   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:50.342442   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:50.347800   15272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:27:50.347891   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:50.359859   15272 ssh_runner.go:195] Run: cat /version.json
	I0109 00:27:50.359859   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:27:52.532476   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:52.532476   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:52.532476   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:27:52.532649   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:52.532649   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:52.532808   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:27:55.206958   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:55.207067   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:55.207232   15272 sshutil.go:53] new ssh client: &{IP:172.24.109.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\id_rsa Username:docker}
	I0109 00:27:55.225892   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:27:55.225892   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:27:55.225892   15272 sshutil.go:53] new ssh client: &{IP:172.24.109.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\id_rsa Username:docker}
	I0109 00:27:55.308703   15272 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1702660877-17806", "minikube_version": "v1.32.0", "commit": "957da21b08687cca2533dd65b67e68ead277b79e"}
	I0109 00:27:55.308703   15272 ssh_runner.go:235] Completed: cat /version.json: (4.9488429s)
	I0109 00:27:55.322918   15272 ssh_runner.go:195] Run: systemctl --version
	I0109 00:27:55.444170   15272 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0109 00:27:55.444170   15272 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0963694s)
	I0109 00:27:55.444298   15272 command_runner.go:130] > systemd 247 (247)
	I0109 00:27:55.444298   15272 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0109 00:27:55.458482   15272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0109 00:27:55.467342   15272 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0109 00:27:55.468108   15272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:27:55.482594   15272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:27:55.504269   15272 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0109 00:27:55.504269   15272 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:27:55.504269   15272 start.go:475] detecting cgroup driver to use...
	I0109 00:27:55.504572   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:27:55.531907   15272 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0109 00:27:55.545771   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0109 00:27:55.582095   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0109 00:27:55.598763   15272 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0109 00:27:55.613075   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0109 00:27:55.642850   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0109 00:27:55.672642   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0109 00:27:55.703130   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0109 00:27:55.734472   15272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:27:55.765166   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0109 00:27:55.795649   15272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:27:55.811632   15272 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0109 00:27:55.823898   15272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:27:55.853749   15272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:27:56.025872   15272 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0109 00:27:56.057491   15272 start.go:475] detecting cgroup driver to use...
	I0109 00:27:56.073810   15272 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0109 00:27:56.098189   15272 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0109 00:27:56.098310   15272 command_runner.go:130] > [Unit]
	I0109 00:27:56.098310   15272 command_runner.go:130] > Description=Docker Application Container Engine
	I0109 00:27:56.098310   15272 command_runner.go:130] > Documentation=https://docs.docker.com
	I0109 00:27:56.098310   15272 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0109 00:27:56.098310   15272 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0109 00:27:56.098310   15272 command_runner.go:130] > StartLimitBurst=3
	I0109 00:27:56.098446   15272 command_runner.go:130] > StartLimitIntervalSec=60
	I0109 00:27:56.098498   15272 command_runner.go:130] > [Service]
	I0109 00:27:56.098498   15272 command_runner.go:130] > Type=notify
	I0109 00:27:56.098498   15272 command_runner.go:130] > Restart=on-failure
	I0109 00:27:56.098592   15272 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0109 00:27:56.098653   15272 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0109 00:27:56.098653   15272 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0109 00:27:56.098653   15272 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0109 00:27:56.098653   15272 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0109 00:27:56.098801   15272 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0109 00:27:56.098801   15272 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0109 00:27:56.098841   15272 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0109 00:27:56.098841   15272 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0109 00:27:56.098841   15272 command_runner.go:130] > ExecStart=
	I0109 00:27:56.098971   15272 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0109 00:27:56.098971   15272 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0109 00:27:56.099100   15272 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0109 00:27:56.099100   15272 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0109 00:27:56.099100   15272 command_runner.go:130] > LimitNOFILE=infinity
	I0109 00:27:56.099100   15272 command_runner.go:130] > LimitNPROC=infinity
	I0109 00:27:56.099100   15272 command_runner.go:130] > LimitCORE=infinity
	I0109 00:27:56.099232   15272 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0109 00:27:56.099232   15272 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0109 00:27:56.099232   15272 command_runner.go:130] > TasksMax=infinity
	I0109 00:27:56.099232   15272 command_runner.go:130] > TimeoutStartSec=0
	I0109 00:27:56.099232   15272 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0109 00:27:56.099352   15272 command_runner.go:130] > Delegate=yes
	I0109 00:27:56.099352   15272 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0109 00:27:56.099352   15272 command_runner.go:130] > KillMode=process
	I0109 00:27:56.099352   15272 command_runner.go:130] > [Install]
	I0109 00:27:56.099484   15272 command_runner.go:130] > WantedBy=multi-user.target
	I0109 00:27:56.118127   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:27:56.150121   15272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:27:56.196289   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:27:56.229165   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0109 00:27:56.266734   15272 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0109 00:27:56.335927   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0109 00:27:56.358258   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:27:56.385273   15272 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0109 00:27:56.404337   15272 ssh_runner.go:195] Run: which cri-dockerd
	I0109 00:27:56.410401   15272 command_runner.go:130] > /usr/bin/cri-dockerd
	I0109 00:27:56.425464   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0109 00:27:56.443462   15272 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0109 00:27:56.483958   15272 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0109 00:27:56.659193   15272 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0109 00:27:56.810052   15272 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0109 00:27:56.811275   15272 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0109 00:27:56.864361   15272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:27:57.032798   15272 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0109 00:27:58.746706   15272 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.7129039s)
	I0109 00:27:58.761092   15272 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0109 00:27:58.923197   15272 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0109 00:27:59.096043   15272 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0109 00:27:59.259656   15272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:27:59.433097   15272 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0109 00:27:59.471868   15272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:27:59.641390   15272 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0109 00:27:59.744721   15272 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0109 00:27:59.762778   15272 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0109 00:27:59.770587   15272 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0109 00:27:59.770587   15272 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0109 00:27:59.770587   15272 command_runner.go:130] > Device: 16h/22d	Inode: 942         Links: 1
	I0109 00:27:59.770587   15272 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0109 00:27:59.770587   15272 command_runner.go:130] > Access: 2024-01-09 00:27:59.660292301 +0000
	I0109 00:27:59.770587   15272 command_runner.go:130] > Modify: 2024-01-09 00:27:59.660292301 +0000
	I0109 00:27:59.770587   15272 command_runner.go:130] > Change: 2024-01-09 00:27:59.664292301 +0000
	I0109 00:27:59.770587   15272 command_runner.go:130] >  Birth: -
	I0109 00:27:59.770587   15272 start.go:543] Will wait 60s for crictl version
	I0109 00:27:59.784516   15272 ssh_runner.go:195] Run: which crictl
	I0109 00:27:59.789727   15272 command_runner.go:130] > /usr/bin/crictl
	I0109 00:27:59.807020   15272 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:27:59.875440   15272 command_runner.go:130] > Version:  0.1.0
	I0109 00:27:59.875440   15272 command_runner.go:130] > RuntimeName:  docker
	I0109 00:27:59.876010   15272 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0109 00:27:59.876010   15272 command_runner.go:130] > RuntimeApiVersion:  v1
	I0109 00:27:59.877983   15272 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0109 00:27:59.888680   15272 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0109 00:27:59.924220   15272 command_runner.go:130] > 24.0.7
	I0109 00:27:59.936026   15272 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0109 00:27:59.973526   15272 command_runner.go:130] > 24.0.7
	I0109 00:27:59.977739   15272 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0109 00:27:59.977832   15272 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0109 00:27:59.982044   15272 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0109 00:27:59.982623   15272 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0109 00:27:59.982623   15272 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0109 00:27:59.982623   15272 ip.go:207] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:c4:61:0a Flags:up|broadcast|multicast|running}
	I0109 00:27:59.984509   15272 ip.go:210] interface addr: fe80::3fa5:15f5:46dc:dc8f/64
	I0109 00:27:59.984509   15272 ip.go:210] interface addr: 172.24.96.1/20
	I0109 00:27:59.998587   15272 ssh_runner.go:195] Run: grep 172.24.96.1	host.minikube.internal$ /etc/hosts
	I0109 00:28:00.003697   15272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.24.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
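
The bash one-liner above makes the host.minikube.internal mapping idempotent: drop any existing line ending in that name, append the current ip<TAB>name record, and copy the result back over /etc/hosts. The same idea in Go, as an illustrative sketch only (requires root to write /etc/hosts; not minikube's code):

// hostsrecord.go — illustrative sketch of the idempotent /etc/hosts update.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsRecord(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry for this name; drop it, like `grep -v` above
		}
		kept = append(kept, line)
	}
	out := strings.Join(kept, "\n")
	if out != "" && !strings.HasSuffix(out, "\n") {
		out += "\n"
	}
	out += fmt.Sprintf("%s\t%s\n", ip, name)
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	// Values taken from the log above.
	if err := ensureHostsRecord("/etc/hosts", "172.24.96.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}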
	I0109 00:28:00.024488   15272 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0109 00:28:00.035326   15272 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0109 00:28:00.065275   15272 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0109 00:28:00.066331   15272 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0109 00:28:00.066331   15272 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0109 00:28:00.066378   15272 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0109 00:28:00.066378   15272 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0109 00:28:00.066378   15272 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0109 00:28:00.066378   15272 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0109 00:28:00.066378   15272 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0109 00:28:00.066378   15272 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:28:00.066378   15272 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0109 00:28:00.066592   15272 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0109 00:28:00.066628   15272 docker.go:601] Images already preloaded, skipping extraction
	I0109 00:28:00.077733   15272 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0109 00:28:00.104485   15272 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0109 00:28:00.104628   15272 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0109 00:28:00.104687   15272 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0109 00:28:00.104736   15272 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0109 00:28:00.104736   15272 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0109 00:28:00.104736   15272 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0109 00:28:00.104779   15272 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0109 00:28:00.104779   15272 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0109 00:28:00.104779   15272 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:28:00.104779   15272 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0109 00:28:00.104880   15272 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0109 00:28:00.104933   15272 cache_images.go:84] Images are preloaded, skipping loading
	I0109 00:28:00.116044   15272 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0109 00:28:00.154237   15272 command_runner.go:130] > cgroupfs
	I0109 00:28:00.154454   15272 cni.go:84] Creating CNI manager for ""
	I0109 00:28:00.154596   15272 cni.go:136] 3 nodes found, recommending kindnet
	I0109 00:28:00.154596   15272 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:28:00.154596   15272 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.24.109.120 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-173500 NodeName:multinode-173500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.24.109.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.24.109.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:28:00.154596   15272 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.24.109.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-173500"
	  kubeletExtraArgs:
	    node-ip: 172.24.109.120
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.24.109.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0109 00:28:00.155201   15272 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-173500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.24.109.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-173500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:28:00.171114   15272 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0109 00:28:00.187746   15272 command_runner.go:130] > kubeadm
	I0109 00:28:00.187746   15272 command_runner.go:130] > kubectl
	I0109 00:28:00.187746   15272 command_runner.go:130] > kubelet
	I0109 00:28:00.187746   15272 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:28:00.202563   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:28:00.217548   15272 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0109 00:28:00.243759   15272 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0109 00:28:00.269429   15272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
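The generated kubeadm config shown earlier is written to /var/tmp/minikube/kubeadm.yaml.new on the node. As an assumed illustration only (this is not minikube's actual template code), stamping the node IP, name, and API server port into such an InitConfiguration can be done with text/template; the values below are taken from the log:

package main

import (
	"os"
	"text/template"
)

// A trimmed-down InitConfiguration template; the real config above also carries
// ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration sections.
const initCfgTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

type nodeOpts struct {
	NodeIP        string
	NodeName      string
	APIServerPort int
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfgTmpl))
	// Values as logged for this node; in minikube they come from the cluster config.
	_ = t.Execute(os.Stdout, nodeOpts{NodeIP: "172.24.109.120", NodeName: "multinode-173500", APIServerPort: 8443})
}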
	I0109 00:28:00.313316   15272 ssh_runner.go:195] Run: grep 172.24.109.120	control-plane.minikube.internal$ /etc/hosts
	I0109 00:28:00.321091   15272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.24.109.120	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
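The bash one-liner above rewrites /etc/hosts so that exactly one line maps control-plane.minikube.internal to the node IP: it drops any stale mapping with grep -v, appends the fresh entry, and copies the result back with sudo. A small Go sketch of the same effect (illustrative helper name; writing /etc/hosts directly requires root, which the log obtains via sudo over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites the hosts file so a single line maps host to ip.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any stale mapping for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "172.24.109.120", "control-plane.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}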
	I0109 00:28:00.339703   15272 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500 for IP: 172.24.109.120
	I0109 00:28:00.340019   15272 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:28:00.340795   15272 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0109 00:28:00.341148   15272 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0109 00:28:00.341989   15272 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\client.key
	I0109 00:28:00.342152   15272 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.key.bbfd95bd
	I0109 00:28:00.342237   15272 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.crt.bbfd95bd with IP's: [172.24.109.120 10.96.0.1 127.0.0.1 10.0.0.1]
	I0109 00:28:00.798419   15272 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.crt.bbfd95bd ...
	I0109 00:28:00.800410   15272 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.crt.bbfd95bd: {Name:mk9251a5692d3b9d1e3ab6651d92285071b27f44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:28:00.802316   15272 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.key.bbfd95bd ...
	I0109 00:28:00.802316   15272 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.key.bbfd95bd: {Name:mk669cd331a0c838d1aad5edde66451e49f2ffcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:28:00.803348   15272 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.crt.bbfd95bd -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.crt
	I0109 00:28:00.814062   15272 certs.go:341] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.key.bbfd95bd -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.key
	I0109 00:28:00.815730   15272 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\proxy-client.key
	I0109 00:28:00.815730   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0109 00:28:00.816292   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0109 00:28:00.816828   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0109 00:28:00.817117   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0109 00:28:00.817222   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0109 00:28:00.817222   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0109 00:28:00.817222   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0109 00:28:00.817757   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0109 00:28:00.817806   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0109 00:28:00.818617   15272 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0109 00:28:00.819004   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0109 00:28:00.819004   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0109 00:28:00.819640   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0109 00:28:00.819640   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0109 00:28:00.820755   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0109 00:28:00.821077   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0109 00:28:00.821191   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:28:00.821191   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0109 00:28:00.822469   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:28:00.864045   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0109 00:28:00.902393   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:28:00.948511   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0109 00:28:00.987660   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:28:01.026065   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0109 00:28:01.067516   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:28:01.111611   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:28:01.150594   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0109 00:28:01.189867   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:28:01.228818   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0109 00:28:01.265944   15272 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:28:01.309461   15272 ssh_runner.go:195] Run: openssl version
	I0109 00:28:01.316336   15272 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0109 00:28:01.330567   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0109 00:28:01.361114   15272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0109 00:28:01.367222   15272 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 23:11 /usr/share/ca-certificates/142882.pem
	I0109 00:28:01.367222   15272 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:11 /usr/share/ca-certificates/142882.pem
	I0109 00:28:01.383942   15272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0109 00:28:01.391524   15272 command_runner.go:130] > 3ec20f2e
	I0109 00:28:01.405125   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:28:01.434762   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:28:01.465937   15272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:28:01.472018   15272 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 22:56 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:28:01.472167   15272 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:56 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:28:01.486134   15272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:28:01.496357   15272 command_runner.go:130] > b5213941
	I0109 00:28:01.511397   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:28:01.542749   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0109 00:28:01.573936   15272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0109 00:28:01.579591   15272 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 23:11 /usr/share/ca-certificates/14288.pem
	I0109 00:28:01.579591   15272 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:11 /usr/share/ca-certificates/14288.pem
	I0109 00:28:01.593099   15272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0109 00:28:01.601244   15272 command_runner.go:130] > 51391683
	I0109 00:28:01.615639   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
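The three blocks above install each CA certificate into /etc/ssl/certs under its OpenSSL subject hash: `openssl x509 -hash -noout` prints the hash (for example b5213941 for minikubeCA.pem), and a `<hash>.0` symlink pointing at the PEM is created so OpenSSL-based clients can find the trust anchor. A minimal Go sketch of that idea, shelling out to openssl (assumes openssl on PATH and write access to the target directory):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert links certPath into certsDir under its OpenSSL subject hash,
// mirroring the "openssl x509 -hash" plus "ln -fs" steps logged above.
func installCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace an existing link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("error:", err)
	}
}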
	I0109 00:28:01.647696   15272 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:28:01.654760   15272 command_runner.go:130] > ca.crt
	I0109 00:28:01.654760   15272 command_runner.go:130] > ca.key
	I0109 00:28:01.654760   15272 command_runner.go:130] > healthcheck-client.crt
	I0109 00:28:01.654760   15272 command_runner.go:130] > healthcheck-client.key
	I0109 00:28:01.654760   15272 command_runner.go:130] > peer.crt
	I0109 00:28:01.654760   15272 command_runner.go:130] > peer.key
	I0109 00:28:01.654760   15272 command_runner.go:130] > server.crt
	I0109 00:28:01.654760   15272 command_runner.go:130] > server.key
	I0109 00:28:01.668796   15272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0109 00:28:01.677235   15272 command_runner.go:130] > Certificate will not expire
	I0109 00:28:01.690640   15272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0109 00:28:01.698852   15272 command_runner.go:130] > Certificate will not expire
	I0109 00:28:01.712364   15272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0109 00:28:01.720975   15272 command_runner.go:130] > Certificate will not expire
	I0109 00:28:01.735702   15272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0109 00:28:01.744720   15272 command_runner.go:130] > Certificate will not expire
	I0109 00:28:01.757920   15272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0109 00:28:01.764801   15272 command_runner.go:130] > Certificate will not expire
	I0109 00:28:01.779125   15272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0109 00:28:01.786741   15272 command_runner.go:130] > Certificate will not expire
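Each `-checkend 86400` call above asks whether a certificate will expire within the next 24 hours; all of them answer "Certificate will not expire". The same check can be expressed natively with crypto/x509; a brief sketch (the path is one of the certs checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the check "openssl x509 -noout -checkend 86400" performs in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if soon {
		fmt.Println("certificate will expire within 24h")
	} else {
		fmt.Println("certificate will not expire") // matches the openssl output above
	}
}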
	I0109 00:28:01.788239   15272 kubeadm.go:404] StartCluster: {Name:multinode-173500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.28.4 ClusterName:multinode-173500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.24.109.120 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.24.108.84 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.24.100.87 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ing
ress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:d
ocker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:28:01.799303   15272 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0109 00:28:01.841802   15272 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:28:01.861426   15272 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0109 00:28:01.861503   15272 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0109 00:28:01.861503   15272 command_runner.go:130] > /var/lib/minikube/etcd:
	I0109 00:28:01.861503   15272 command_runner.go:130] > member
	I0109 00:28:01.861565   15272 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0109 00:28:01.861647   15272 kubeadm.go:636] restartCluster start
	I0109 00:28:01.874359   15272 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0109 00:28:01.891474   15272 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:28:01.892667   15272 kubeconfig.go:135] verify returned: extract IP: "multinode-173500" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:28:01.892772   15272 kubeconfig.go:146] "multinode-173500" context is missing from C:\Users\jenkins.minikube1\minikube-integration\kubeconfig - will repair!
	I0109 00:28:01.893342   15272 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:28:01.905617   15272 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:28:01.906575   15272 kapi.go:59] client config for multinode-173500: &rest.Config{Host:"https://172.24.109.120:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500/client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500/client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e2c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0109 00:28:01.908130   15272 cert_rotation.go:137] Starting client certificate rotation controller
	I0109 00:28:01.921594   15272 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0109 00:28:01.940453   15272 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0109 00:28:01.940453   15272 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0109 00:28:01.940453   15272 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0109 00:28:01.940453   15272 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0109 00:28:01.940453   15272 command_runner.go:130] >  kind: InitConfiguration
	I0109 00:28:01.940453   15272 command_runner.go:130] >  localAPIEndpoint:
	I0109 00:28:01.941491   15272 command_runner.go:130] > -  advertiseAddress: 172.24.100.178
	I0109 00:28:01.941491   15272 command_runner.go:130] > +  advertiseAddress: 172.24.109.120
	I0109 00:28:01.941491   15272 command_runner.go:130] >    bindPort: 8443
	I0109 00:28:01.941491   15272 command_runner.go:130] >  bootstrapTokens:
	I0109 00:28:01.941491   15272 command_runner.go:130] >    - groups:
	I0109 00:28:01.941491   15272 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0109 00:28:01.941491   15272 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0109 00:28:01.941491   15272 command_runner.go:130] >    name: "multinode-173500"
	I0109 00:28:01.941491   15272 command_runner.go:130] >    kubeletExtraArgs:
	I0109 00:28:01.941491   15272 command_runner.go:130] > -    node-ip: 172.24.100.178
	I0109 00:28:01.941491   15272 command_runner.go:130] > +    node-ip: 172.24.109.120
	I0109 00:28:01.941491   15272 command_runner.go:130] >    taints: []
	I0109 00:28:01.941491   15272 command_runner.go:130] >  ---
	I0109 00:28:01.941491   15272 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0109 00:28:01.941491   15272 command_runner.go:130] >  kind: ClusterConfiguration
	I0109 00:28:01.941491   15272 command_runner.go:130] >  apiServer:
	I0109 00:28:01.941491   15272 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.24.100.178"]
	I0109 00:28:01.941491   15272 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.24.109.120"]
	I0109 00:28:01.941491   15272 command_runner.go:130] >    extraArgs:
	I0109 00:28:01.941491   15272 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0109 00:28:01.941491   15272 command_runner.go:130] >  controllerManager:
	I0109 00:28:01.941491   15272 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.24.100.178
	+  advertiseAddress: 172.24.109.120
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-173500"
	   kubeletExtraArgs:
	-    node-ip: 172.24.100.178
	+    node-ip: 172.24.109.120
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.24.100.178"]
	+  certSANs: ["127.0.0.1", "localhost", "172.24.109.120"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
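The reconfigure decision above comes from diffing the kubeadm.yaml already on the node against the freshly rendered kubeadm.yaml.new: a non-empty `diff -u` (exit status 1) means the configs differ, as they do here because the node IP changed from 172.24.100.178 to 172.24.109.120. A minimal sketch of that exit-code check in Go (paths as in the log):

package main

import (
	"fmt"
	"os/exec"
)

// configsDiffer runs "diff -u old new" and interprets the exit status:
// 0 = identical, 1 = files differ, anything else = diff itself failed.
func configsDiffer(oldPath, newPath string) (bool, error) {
	err := exec.Command("diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		return true, nil // differences found, reconfigure needed
	}
	return false, err
}

func main() {
	differ, err := configsDiffer("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("needs reconfigure:", differ)
}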
	I0109 00:28:01.941491   15272 kubeadm.go:1135] stopping kube-system containers ...
	I0109 00:28:01.950457   15272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0109 00:28:01.982991   15272 command_runner.go:130] > cc24fe03754e
	I0109 00:28:01.982991   15272 command_runner.go:130] > 87cfa509bf08
	I0109 00:28:01.982991   15272 command_runner.go:130] > 95f02a16160e
	I0109 00:28:01.982991   15272 command_runner.go:130] > ea6b136c3ff5
	I0109 00:28:01.982991   15272 command_runner.go:130] > 73ce70f8eca1
	I0109 00:28:01.982991   15272 command_runner.go:130] > 9faec0fdff89
	I0109 00:28:01.982991   15272 command_runner.go:130] > f8bc35a82f65
	I0109 00:28:01.982991   15272 command_runner.go:130] > 4ab23b363c35
	I0109 00:28:01.982991   15272 command_runner.go:130] > 16fd62cddf8b
	I0109 00:28:01.982991   15272 command_runner.go:130] > c6bc1bb3e368
	I0109 00:28:01.982991   15272 command_runner.go:130] > aa0ba9733b8d
	I0109 00:28:01.982991   15272 command_runner.go:130] > e4e40eb718ff
	I0109 00:28:01.982991   15272 command_runner.go:130] > 414e36a1f442
	I0109 00:28:01.982991   15272 command_runner.go:130] > 1b9f9a6d5d52
	I0109 00:28:01.982991   15272 command_runner.go:130] > f45ca2656d29
	I0109 00:28:01.982991   15272 command_runner.go:130] > ae920e11c344
	I0109 00:28:01.982991   15272 docker.go:469] Stopping containers: [cc24fe03754e 87cfa509bf08 95f02a16160e ea6b136c3ff5 73ce70f8eca1 9faec0fdff89 f8bc35a82f65 4ab23b363c35 16fd62cddf8b c6bc1bb3e368 aa0ba9733b8d e4e40eb718ff 414e36a1f442 1b9f9a6d5d52 f45ca2656d29 ae920e11c344]
	I0109 00:28:01.994478   15272 ssh_runner.go:195] Run: docker stop cc24fe03754e 87cfa509bf08 95f02a16160e ea6b136c3ff5 73ce70f8eca1 9faec0fdff89 f8bc35a82f65 4ab23b363c35 16fd62cddf8b c6bc1bb3e368 aa0ba9733b8d e4e40eb718ff 414e36a1f442 1b9f9a6d5d52 f45ca2656d29 ae920e11c344
	I0109 00:28:02.021557   15272 command_runner.go:130] > cc24fe03754e
	I0109 00:28:02.021557   15272 command_runner.go:130] > 87cfa509bf08
	I0109 00:28:02.021557   15272 command_runner.go:130] > 95f02a16160e
	I0109 00:28:02.021557   15272 command_runner.go:130] > ea6b136c3ff5
	I0109 00:28:02.021557   15272 command_runner.go:130] > 73ce70f8eca1
	I0109 00:28:02.021557   15272 command_runner.go:130] > 9faec0fdff89
	I0109 00:28:02.021557   15272 command_runner.go:130] > f8bc35a82f65
	I0109 00:28:02.021557   15272 command_runner.go:130] > 4ab23b363c35
	I0109 00:28:02.021557   15272 command_runner.go:130] > 16fd62cddf8b
	I0109 00:28:02.021688   15272 command_runner.go:130] > c6bc1bb3e368
	I0109 00:28:02.021688   15272 command_runner.go:130] > aa0ba9733b8d
	I0109 00:28:02.021688   15272 command_runner.go:130] > e4e40eb718ff
	I0109 00:28:02.021688   15272 command_runner.go:130] > 414e36a1f442
	I0109 00:28:02.021688   15272 command_runner.go:130] > 1b9f9a6d5d52
	I0109 00:28:02.021738   15272 command_runner.go:130] > f45ca2656d29
	I0109 00:28:02.021738   15272 command_runner.go:130] > ae920e11c344
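Before reconfiguring, the kube-system containers are enumerated with a name filter and then stopped with a single `docker stop` invocation, as shown above. A hedged Go sketch of that two-step shell-out (filter and format strings copied from the logged commands):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopKubeSystemContainers lists containers whose names match the
// k8s_.*_(kube-system)_ pattern and stops them in one "docker stop" call.
func stopKubeSystemContainers() error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		return fmt.Errorf("docker ps: %w", err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return nil // nothing to stop
	}
	args := append([]string{"stop"}, ids...)
	return exec.Command("docker", args...).Run()
}

func main() {
	if err := stopKubeSystemContainers(); err != nil {
		fmt.Println("error:", err)
	}
}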
	I0109 00:28:02.035591   15272 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0109 00:28:02.076835   15272 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:28:02.092005   15272 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0109 00:28:02.092271   15272 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0109 00:28:02.092271   15272 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0109 00:28:02.092326   15272 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:28:02.092530   15272 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:28:02.107619   15272 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:28:02.122226   15272 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0109 00:28:02.122226   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:28:02.538013   15272 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0109 00:28:02.538013   15272 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0109 00:28:02.538013   15272 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0109 00:28:02.538185   15272 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0109 00:28:02.538185   15272 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0109 00:28:02.538185   15272 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0109 00:28:02.538185   15272 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0109 00:28:02.538185   15272 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0109 00:28:02.538185   15272 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0109 00:28:02.538185   15272 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0109 00:28:02.538185   15272 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0109 00:28:02.538185   15272 command_runner.go:130] > [certs] Using the existing "sa" key
	I0109 00:28:02.538185   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:28:03.908879   15272 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0109 00:28:03.908963   15272 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0109 00:28:03.909045   15272 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0109 00:28:03.909045   15272 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0109 00:28:03.909045   15272 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0109 00:28:03.909111   15272 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.3709258s)
	I0109 00:28:03.909111   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:28:04.189591   15272 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:28:04.189591   15272 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:28:04.189591   15272 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0109 00:28:04.190535   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:28:04.285548   15272 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0109 00:28:04.285644   15272 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0109 00:28:04.285644   15272 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0109 00:28:04.285644   15272 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0109 00:28:04.285729   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:28:04.370779   15272 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0109 00:28:04.370779   15272 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:28:04.385515   15272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:28:04.888886   15272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:28:05.393956   15272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:28:05.898445   15272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:28:06.391926   15272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:28:06.901453   15272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:28:07.396758   15272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:28:07.434753   15272 command_runner.go:130] > 1838
	I0109 00:28:07.437808   15272 api_server.go:72] duration metric: took 3.0670063s to wait for apiserver process to appear ...
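The loop above re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every half second until the kube-apiserver process appears (PID 1838 after about 3 seconds). A brief sketch of that process wait (illustrative helper name):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForProcess polls pgrep until a process matching pattern exists and
// returns its PID, like the apiserver wait logged above.
func waitForProcess(pattern string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("pgrep", "-xnf", pattern).Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	pid, err := waitForProcess("kube-apiserver.*minikube.*", time.Minute)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}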
	I0109 00:28:07.437808   15272 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:28:07.437871   15272 api_server.go:253] Checking apiserver healthz at https://172.24.109.120:8443/healthz ...
	I0109 00:28:11.926831   15272 api_server.go:279] https://172.24.109.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:28:11.927431   15272 api_server.go:103] status: https://172.24.109.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:28:11.927541   15272 api_server.go:253] Checking apiserver healthz at https://172.24.109.120:8443/healthz ...
	I0109 00:28:11.998616   15272 api_server.go:279] https://172.24.109.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:28:11.999185   15272 api_server.go:103] status: https://172.24.109.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:28:11.999185   15272 api_server.go:253] Checking apiserver healthz at https://172.24.109.120:8443/healthz ...
	I0109 00:28:12.025074   15272 api_server.go:279] https://172.24.109.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:28:12.025074   15272 api_server.go:103] status: https://172.24.109.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:28:12.445306   15272 api_server.go:253] Checking apiserver healthz at https://172.24.109.120:8443/healthz ...
	I0109 00:28:12.454566   15272 api_server.go:279] https://172.24.109.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:28:12.454566   15272 api_server.go:103] status: https://172.24.109.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:28:12.946830   15272 api_server.go:253] Checking apiserver healthz at https://172.24.109.120:8443/healthz ...
	I0109 00:28:12.955551   15272 api_server.go:279] https://172.24.109.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:28:12.955721   15272 api_server.go:103] status: https://172.24.109.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:28:13.450913   15272 api_server.go:253] Checking apiserver healthz at https://172.24.109.120:8443/healthz ...
	I0109 00:28:13.460096   15272 api_server.go:279] https://172.24.109.120:8443/healthz returned 200:
	ok
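The control plane is considered healthy once GET /healthz returns 200 with body "ok". The 403 responses above appear while anonymous access to /healthz is not yet permitted, and the 500s while post-start hooks such as rbac/bootstrap-roles are still completing, so the poll simply retries until the final "ok". A minimal polling sketch (this sketch skips TLS verification purely to stay self-contained; the real client authenticates with the cluster CA and client certs):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it answers
// 200 "ok" or the deadline passes, mirroring the loop in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://172.24.109.120:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println("error:", err)
	}
}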
	I0109 00:28:13.460550   15272 round_trippers.go:463] GET https://172.24.109.120:8443/version
	I0109 00:28:13.460550   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:13.460550   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:13.460550   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:13.474318   15272 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0109 00:28:13.474392   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:13.474392   15272 round_trippers.go:580]     Audit-Id: 845d0f29-8073-49bb-83e3-7a5c9701a899
	I0109 00:28:13.474392   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:13.474392   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:13.474392   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:13.474486   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:13.474486   15272 round_trippers.go:580]     Content-Length: 264
	I0109 00:28:13.474486   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:13 GMT
	I0109 00:28:13.474559   15272 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0109 00:28:13.474653   15272 api_server.go:141] control plane version: v1.28.4
	I0109 00:28:13.474741   15272 api_server.go:131] duration metric: took 6.036933s to wait for apiserver health ...
	I0109 00:28:13.474741   15272 cni.go:84] Creating CNI manager for ""
	I0109 00:28:13.474741   15272 cni.go:136] 3 nodes found, recommending kindnet
	I0109 00:28:13.477562   15272 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0109 00:28:13.493679   15272 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0109 00:28:13.501626   15272 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0109 00:28:13.501626   15272 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0109 00:28:13.501714   15272 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0109 00:28:13.501714   15272 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0109 00:28:13.501714   15272 command_runner.go:130] > Access: 2024-01-09 00:26:43.947705700 +0000
	I0109 00:28:13.501714   15272 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0109 00:28:13.501714   15272 command_runner.go:130] > Change: 2024-01-09 00:26:31.489000000 +0000
	I0109 00:28:13.501714   15272 command_runner.go:130] >  Birth: -
	I0109 00:28:13.501810   15272 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0109 00:28:13.501867   15272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0109 00:28:13.548345   15272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0109 00:28:16.132925   15272 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0109 00:28:16.133000   15272 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0109 00:28:16.133000   15272 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0109 00:28:16.133000   15272 command_runner.go:130] > daemonset.apps/kindnet configured
	I0109 00:28:16.133000   15272 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.5846543s)
	I0109 00:28:16.133216   15272 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:28:16.133442   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods
	I0109 00:28:16.133442   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.133442   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.133512   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.138831   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:28:16.139858   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.139902   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.139902   15272 round_trippers.go:580]     Audit-Id: 46182e30-0800-4ab0-b236-c403a7e5ddf6
	I0109 00:28:16.139902   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.139902   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.139902   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.139902   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.141190   15272 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1734"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"1670","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84171 chars]
	I0109 00:28:16.147800   15272 system_pods.go:59] 12 kube-system pods found
	I0109 00:28:16.147800   15272 system_pods.go:61] "coredns-5dd5756b68-bkss9" [463fb6c6-1e85-419f-9c13-96e58a2ec22e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0109 00:28:16.147800   15272 system_pods.go:61] "etcd-multinode-173500" [bbcb3d33-7daf-43d9-b596-66cbce3552ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0109 00:28:16.147800   15272 system_pods.go:61] "kindnet-6nz87" [73ad6ec4-cbfb-4b93-888c-3d430f3c7bf2] Running
	I0109 00:28:16.147800   15272 system_pods.go:61] "kindnet-ht547" [711faf1a-9f77-487c-bd84-1e227ab9c51a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0109 00:28:16.147800   15272 system_pods.go:61] "kindnet-t72cs" [63893803-de87-4df9-ac98-3772bd46603c] Running
	I0109 00:28:16.147800   15272 system_pods.go:61] "kube-apiserver-multinode-173500" [6ec45d85-b2d5-483f-afdd-ee98dbb0edd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0109 00:28:16.147800   15272 system_pods.go:61] "kube-controller-manager-multinode-173500" [a0252ea5-5d6a-4303-b7e6-151481d4cd8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0109 00:28:16.147800   15272 system_pods.go:61] "kube-proxy-4h4sv" [a45861ba-73e0-452f-a535-f66e154ea1c6] Running
	I0109 00:28:16.147800   15272 system_pods.go:61] "kube-proxy-mj6ks" [bd23c4c8-d363-4a3f-b750-a3de2346a3bb] Running
	I0109 00:28:16.147800   15272 system_pods.go:61] "kube-proxy-qrtm6" [37d066e0-6ff3-4f22-abc3-6bddfa64736e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0109 00:28:16.147800   15272 system_pods.go:61] "kube-scheduler-multinode-173500" [31d8cdf6-292f-4b3c-87c5-951fc34d0ea4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0109 00:28:16.147800   15272 system_pods.go:61] "storage-provisioner" [936240bb-4bdd-4681-91a9-cb458c623805] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0109 00:28:16.147800   15272 system_pods.go:74] duration metric: took 14.5839ms to wait for pod list to return data ...
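The pod listing above is a GET against /api/v1/namespaces/kube-system/pods using the repaired kubeconfig. As an assumed equivalent using the client-go library (minikube's own wait goes through its internal kapi client, not this code; the kubeconfig path is the one shown in the log):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as written on this Jenkins host; adjust for your setup.
	config, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube1\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Prints each kube-system pod and its phase, the data summarized in the log above.
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}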
	I0109 00:28:16.147800   15272 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:28:16.147800   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes
	I0109 00:28:16.147800   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.147800   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.147800   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.153789   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:28:16.153789   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.153789   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.153789   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.153789   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.153789   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.153789   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.153789   15272 round_trippers.go:580]     Audit-Id: 2293575c-ba4e-439b-ae5d-f108447b3fef
	I0109 00:28:16.153789   15272 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1734"},"items":[{"metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14858 chars]
	I0109 00:28:16.154796   15272 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:28:16.155785   15272 node_conditions.go:123] node cpu capacity is 2
	I0109 00:28:16.155785   15272 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:28:16.155785   15272 node_conditions.go:123] node cpu capacity is 2
	I0109 00:28:16.155785   15272 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:28:16.155785   15272 node_conditions.go:123] node cpu capacity is 2
	I0109 00:28:16.155785   15272 node_conditions.go:105] duration metric: took 7.9855ms to run NodePressure ...
	I0109 00:28:16.155785   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:28:16.645878   15272 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0109 00:28:16.645878   15272 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0109 00:28:16.646033   15272 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0109 00:28:16.646155   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0109 00:28:16.646235   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.646235   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.646235   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.651110   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:16.651110   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.651110   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.651110   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.651110   15272 round_trippers.go:580]     Audit-Id: 1c1375f4-94a5-4965-887a-9fac15f9a697
	I0109 00:28:16.651110   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.651110   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.651110   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.651594   15272 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1739"},"items":[{"metadata":{"name":"etcd-multinode-173500","namespace":"kube-system","uid":"bbcb3d33-7daf-43d9-b596-66cbce3552ad","resourceVersion":"1660","creationTimestamp":"2024-01-09T00:05:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.24.100.178:2379","kubernetes.io/config.hash":"8b9b6f8e7be121dc69cce9e8aca59417","kubernetes.io/config.mirror":"8b9b6f8e7be121dc69cce9e8aca59417","kubernetes.io/config.seen":"2024-01-09T00:05:31.606498270Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 29377 chars]
	I0109 00:28:16.653731   15272 kubeadm.go:787] kubelet initialised
	I0109 00:28:16.653800   15272 kubeadm.go:788] duration metric: took 7.7669ms waiting for restarted kubelet to initialise ...
	I0109 00:28:16.653800   15272 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:28:16.653942   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods
	I0109 00:28:16.653942   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.653942   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.654011   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.658451   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:16.658451   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.658451   15272 round_trippers.go:580]     Audit-Id: 62fe31ad-e159-40fb-ace1-2860b1cbe504
	I0109 00:28:16.658451   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.659466   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.659466   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.659510   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.659510   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.661558   15272 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1739"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"1670","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84171 chars]
	I0109 00:28:16.665707   15272 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:16.665845   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bkss9
	I0109 00:28:16.665949   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.665949   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.665989   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.669365   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:16.669365   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.669365   15272 round_trippers.go:580]     Audit-Id: c0e760a2-bf91-4bfd-9982-72b42bebd44d
	I0109 00:28:16.669365   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.670367   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.670367   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.670367   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.670367   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.670608   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"1670","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0109 00:28:16.671258   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:16.671258   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.671258   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.671332   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.676637   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:28:16.676637   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.676637   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.676637   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.676637   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.676637   15272 round_trippers.go:580]     Audit-Id: b79e19a1-86a2-43f7-b713-ffa7655775c7
	I0109 00:28:16.676637   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.676637   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.676637   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:16.677421   15272 pod_ready.go:97] node "multinode-173500" hosting pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:16.677421   15272 pod_ready.go:81] duration metric: took 11.7141ms waiting for pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace to be "Ready" ...
	E0109 00:28:16.677421   15272 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-173500" hosting pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:16.677421   15272 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:16.677421   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-173500
	I0109 00:28:16.677421   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.677421   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.677421   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.681724   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:16.681724   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.681724   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.681724   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.681724   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.681724   15272 round_trippers.go:580]     Audit-Id: 1c666e58-77b3-49bc-9d0e-f15ae83cc4fe
	I0109 00:28:16.681724   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.681724   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.682105   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-173500","namespace":"kube-system","uid":"bbcb3d33-7daf-43d9-b596-66cbce3552ad","resourceVersion":"1660","creationTimestamp":"2024-01-09T00:05:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.24.100.178:2379","kubernetes.io/config.hash":"8b9b6f8e7be121dc69cce9e8aca59417","kubernetes.io/config.mirror":"8b9b6f8e7be121dc69cce9e8aca59417","kubernetes.io/config.seen":"2024-01-09T00:05:31.606498270Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6097 chars]
	I0109 00:28:16.682681   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:16.682738   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.682738   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.682810   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.685784   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:28:16.685907   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.685907   15272 round_trippers.go:580]     Audit-Id: b9c95b47-e43f-4979-8194-764ea91d789c
	I0109 00:28:16.686006   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.686006   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.686006   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.686006   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.686084   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.686161   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:16.686699   15272 pod_ready.go:97] node "multinode-173500" hosting pod "etcd-multinode-173500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:16.686760   15272 pod_ready.go:81] duration metric: took 9.3389ms waiting for pod "etcd-multinode-173500" in "kube-system" namespace to be "Ready" ...
	E0109 00:28:16.686760   15272 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-173500" hosting pod "etcd-multinode-173500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:16.686842   15272 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:16.686915   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-173500
	I0109 00:28:16.686915   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.686915   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.686915   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.689118   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:28:16.689118   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.689118   15272 round_trippers.go:580]     Audit-Id: 838d4199-e440-4dc3-990a-0e99ae3707e6
	I0109 00:28:16.689118   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.689118   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.689118   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.689118   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.689118   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.690158   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-173500","namespace":"kube-system","uid":"6ec45d85-b2d5-483f-afdd-ee98dbb0edd1","resourceVersion":"1664","creationTimestamp":"2024-01-09T00:05:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.24.100.178:8443","kubernetes.io/config.hash":"6d4780fbf78826137e2d0549410b3c52","kubernetes.io/config.mirror":"6d4780fbf78826137e2d0549410b3c52","kubernetes.io/config.seen":"2024-01-09T00:05:31.606503570Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7653 chars]
	I0109 00:28:16.690158   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:16.690158   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.690158   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.690158   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.694120   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:16.694120   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.694406   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.694406   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.694406   15272 round_trippers.go:580]     Audit-Id: fcfa1c6d-ab20-484a-8cbf-ce288cdd93e6
	I0109 00:28:16.694406   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.694406   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.694406   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.694709   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:16.695173   15272 pod_ready.go:97] node "multinode-173500" hosting pod "kube-apiserver-multinode-173500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:16.695265   15272 pod_ready.go:81] duration metric: took 8.4222ms waiting for pod "kube-apiserver-multinode-173500" in "kube-system" namespace to be "Ready" ...
	E0109 00:28:16.695265   15272 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-173500" hosting pod "kube-apiserver-multinode-173500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:16.695265   15272 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:16.695265   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-173500
	I0109 00:28:16.695265   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.695265   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.695265   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.697875   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:28:16.697875   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.697875   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.697875   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.697875   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.697875   15272 round_trippers.go:580]     Audit-Id: 6d4d3f48-ad44-40a2-a989-800ffa185c2a
	I0109 00:28:16.697875   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.697875   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.697875   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-173500","namespace":"kube-system","uid":"a0252ea5-5d6a-4303-b7e6-151481d4cd8a","resourceVersion":"1712","creationTimestamp":"2024-01-09T00:05:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f6b180d5a2686dc98b0355b6df7f53ea","kubernetes.io/config.mirror":"f6b180d5a2686dc98b0355b6df7f53ea","kubernetes.io/config.seen":"2024-01-09T00:05:31.606504770Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7441 chars]
	I0109 00:28:16.698876   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:16.698876   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.698876   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.698876   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.701875   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:28:16.701875   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.701875   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.701875   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.701875   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.702322   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.702322   15272 round_trippers.go:580]     Audit-Id: a25caa60-adf0-456f-871b-4b0c22d4a104
	I0109 00:28:16.702375   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.702735   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:16.702735   15272 pod_ready.go:97] node "multinode-173500" hosting pod "kube-controller-manager-multinode-173500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:16.702735   15272 pod_ready.go:81] duration metric: took 7.4705ms waiting for pod "kube-controller-manager-multinode-173500" in "kube-system" namespace to be "Ready" ...
	E0109 00:28:16.702735   15272 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-173500" hosting pod "kube-controller-manager-multinode-173500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:16.703293   15272 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4h4sv" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:16.858341   15272 request.go:629] Waited for 154.7254ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4h4sv
	I0109 00:28:16.858478   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4h4sv
	I0109 00:28:16.858478   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:16.858478   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:16.858478   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:16.864201   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:28:16.864201   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:16.864201   15272 round_trippers.go:580]     Audit-Id: e1755f9b-a866-41c0-be63-8fb3151bd3be
	I0109 00:28:16.864201   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:16.864201   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:16.864201   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:16.864201   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:16.864201   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:16 GMT
	I0109 00:28:16.864483   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4h4sv","generateName":"kube-proxy-","namespace":"kube-system","uid":"a45861ba-73e0-452f-a535-f66e154ea1c6","resourceVersion":"592","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec2434b9-012c-4df1-b401-04556fed7700","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec2434b9-012c-4df1-b401-04556fed7700\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0109 00:28:17.061026   15272 request.go:629] Waited for 195.998ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:28:17.061187   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:28:17.061187   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:17.061187   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:17.061404   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:17.065801   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:17.065801   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:17.065801   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:17.065801   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:17.065801   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:17.066117   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:17 GMT
	I0109 00:28:17.066117   15272 round_trippers.go:580]     Audit-Id: d369ee4a-f561-4513-b463-93fb9ba94bb5
	I0109 00:28:17.066117   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:17.066311   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"1573","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_23_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager": [truncated 3819 chars]
	I0109 00:28:17.066776   15272 pod_ready.go:92] pod "kube-proxy-4h4sv" in "kube-system" namespace has status "Ready":"True"
	I0109 00:28:17.066841   15272 pod_ready.go:81] duration metric: took 363.5477ms waiting for pod "kube-proxy-4h4sv" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:17.066841   15272 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mj6ks" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:17.248845   15272 request.go:629] Waited for 181.6881ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mj6ks
	I0109 00:28:17.248941   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mj6ks
	I0109 00:28:17.248941   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:17.248941   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:17.249035   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:17.253453   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:17.253453   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:17.253453   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:17.253453   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:17 GMT
	I0109 00:28:17.253453   15272 round_trippers.go:580]     Audit-Id: e2e4bbfb-22ea-429e-ac28-382b573059ba
	I0109 00:28:17.253453   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:17.253453   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:17.253453   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:17.254084   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mj6ks","generateName":"kube-proxy-","namespace":"kube-system","uid":"bd23c4c8-d363-4a3f-b750-a3de2346a3bb","resourceVersion":"1587","creationTimestamp":"2024-01-09T00:13:24Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec2434b9-012c-4df1-b401-04556fed7700","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:13:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec2434b9-012c-4df1-b401-04556fed7700\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0109 00:28:17.453694   15272 request.go:629] Waited for 198.6122ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m03
	I0109 00:28:17.453927   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m03
	I0109 00:28:17.453927   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:17.453927   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:17.454027   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:17.457991   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:17.457991   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:17.457991   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:17.457991   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:17.457991   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:17 GMT
	I0109 00:28:17.457991   15272 round_trippers.go:580]     Audit-Id: 797b0fce-0d07-4493-b613-e1e500c6475d
	I0109 00:28:17.457991   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:17.457991   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:17.459176   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m03","uid":"9d8a783a-d01b-498d-94ae-1e3f65e7667c","resourceVersion":"1603","creationTimestamp":"2024-01-09T00:23:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_23_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:23:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3636 chars]
	I0109 00:28:17.459520   15272 pod_ready.go:92] pod "kube-proxy-mj6ks" in "kube-system" namespace has status "Ready":"True"
	I0109 00:28:17.459640   15272 pod_ready.go:81] duration metric: took 392.7988ms waiting for pod "kube-proxy-mj6ks" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:17.459640   15272 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qrtm6" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:17.658797   15272 request.go:629] Waited for 198.9374ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qrtm6
	I0109 00:28:17.658797   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qrtm6
	I0109 00:28:17.658797   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:17.658797   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:17.658797   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:17.663701   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:17.663701   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:17.663701   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:17.663701   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:17.663701   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:17 GMT
	I0109 00:28:17.663701   15272 round_trippers.go:580]     Audit-Id: a15d7487-5c0a-4f60-9399-3cddb281509c
	I0109 00:28:17.663701   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:17.663701   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:17.663701   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qrtm6","generateName":"kube-proxy-","namespace":"kube-system","uid":"37d066e0-6ff3-4f22-abc3-6bddfa64736e","resourceVersion":"1659","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec2434b9-012c-4df1-b401-04556fed7700","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec2434b9-012c-4df1-b401-04556fed7700\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5933 chars]
	I0109 00:28:17.846599   15272 request.go:629] Waited for 181.974ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:17.846599   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:17.846599   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:17.846599   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:17.846599   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:17.851434   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:17.851434   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:17.851528   15272 round_trippers.go:580]     Audit-Id: 8f0edb2b-65ad-4f85-a91f-7ff1b75dd82b
	I0109 00:28:17.851528   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:17.851528   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:17.851528   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:17.851528   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:17.851528   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:17 GMT
	I0109 00:28:17.851851   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:17.852323   15272 pod_ready.go:97] node "multinode-173500" hosting pod "kube-proxy-qrtm6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:17.852417   15272 pod_ready.go:81] duration metric: took 392.7774ms waiting for pod "kube-proxy-qrtm6" in "kube-system" namespace to be "Ready" ...
	E0109 00:28:17.852417   15272 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-173500" hosting pod "kube-proxy-qrtm6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:17.852417   15272 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:18.049110   15272 request.go:629] Waited for 196.3633ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-173500
	I0109 00:28:18.049300   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-173500
	I0109 00:28:18.049359   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:18.049359   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:18.049359   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:18.053633   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:18.054376   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:18.054376   15272 round_trippers.go:580]     Audit-Id: 7b9aaa9d-2b80-497c-bab3-0e264d561aab
	I0109 00:28:18.054376   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:18.054376   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:18.054376   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:18.054376   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:18.054376   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:18 GMT
	I0109 00:28:18.054544   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-173500","namespace":"kube-system","uid":"31d8cdf6-292f-4b3c-87c5-951fc34d0ea4","resourceVersion":"1663","creationTimestamp":"2024-01-09T00:05:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70306498a200a6bbe0aa0b41e240d63b","kubernetes.io/config.mirror":"70306498a200a6bbe0aa0b41e240d63b","kubernetes.io/config.seen":"2024-01-09T00:05:21.481168866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5153 chars]
	I0109 00:28:18.252644   15272 request.go:629] Waited for 197.7817ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:18.252724   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:18.252724   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:18.252793   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:18.252793   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:18.257335   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:18.257335   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:18.257335   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:18 GMT
	I0109 00:28:18.257335   15272 round_trippers.go:580]     Audit-Id: 51295b7b-4924-446a-8bf5-a99ac6c843e3
	I0109 00:28:18.257335   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:18.257335   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:18.257335   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:18.257335   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:18.257961   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:18.257961   15272 pod_ready.go:97] node "multinode-173500" hosting pod "kube-scheduler-multinode-173500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:18.257961   15272 pod_ready.go:81] duration metric: took 405.5438ms waiting for pod "kube-scheduler-multinode-173500" in "kube-system" namespace to be "Ready" ...
	E0109 00:28:18.257961   15272 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-173500" hosting pod "kube-scheduler-multinode-173500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500" has status "Ready":"False"
	I0109 00:28:18.257961   15272 pod_ready.go:38] duration metric: took 1.6041611s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:28:18.258506   15272 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:28:18.276527   15272 command_runner.go:130] > -16
	I0109 00:28:18.277572   15272 ops.go:34] apiserver oom_adj: -16
	I0109 00:28:18.278136   15272 kubeadm.go:640] restartCluster took 16.4159093s
	I0109 00:28:18.278136   15272 kubeadm.go:406] StartCluster complete in 16.489958s
	I0109 00:28:18.278136   15272 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:28:18.278390   15272 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:28:18.279954   15272 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:28:18.281407   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:28:18.281562   15272 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0109 00:28:18.287760   15272 out.go:177] * Enabled addons: 
	I0109 00:28:18.281931   15272 config.go:182] Loaded profile config "multinode-173500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0109 00:28:18.294796   15272 addons.go:508] enable addons completed in 13.2337ms: enabled=[]
	I0109 00:28:18.296350   15272 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:28:18.297407   15272 kapi.go:59] client config for multinode-173500: &rest.Config{Host:"https://172.24.109.120:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e2c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0109 00:28:18.299050   15272 cert_rotation.go:137] Starting client certificate rotation controller
	I0109 00:28:18.299419   15272 round_trippers.go:463] GET https://172.24.109.120:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0109 00:28:18.299483   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:18.299483   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:18.299483   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:18.315417   15272 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0109 00:28:18.315417   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:18.315500   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:18.315500   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:18.315500   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:18.315500   15272 round_trippers.go:580]     Content-Length: 292
	I0109 00:28:18.315500   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:18 GMT
	I0109 00:28:18.315500   15272 round_trippers.go:580]     Audit-Id: 8308034a-c7ea-4e35-9ca0-c70ece8c0672
	I0109 00:28:18.315570   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:18.315600   15272 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"40c365d2-2414-4cb8-9731-fc615f6d2dcd","resourceVersion":"1737","creationTimestamp":"2024-01-09T00:05:31Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0109 00:28:18.315908   15272 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-173500" context rescaled to 1 replicas
	I0109 00:28:18.315908   15272 start.go:223] Will wait 6m0s for node &{Name: IP:172.24.109.120 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0109 00:28:18.319540   15272 out.go:177] * Verifying Kubernetes components...
	I0109 00:28:18.335530   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:28:18.426525   15272 command_runner.go:130] > apiVersion: v1
	I0109 00:28:18.426525   15272 command_runner.go:130] > data:
	I0109 00:28:18.426525   15272 command_runner.go:130] >   Corefile: |
	I0109 00:28:18.426525   15272 command_runner.go:130] >     .:53 {
	I0109 00:28:18.426525   15272 command_runner.go:130] >         log
	I0109 00:28:18.427531   15272 command_runner.go:130] >         errors
	I0109 00:28:18.427531   15272 command_runner.go:130] >         health {
	I0109 00:28:18.427556   15272 command_runner.go:130] >            lameduck 5s
	I0109 00:28:18.427556   15272 command_runner.go:130] >         }
	I0109 00:28:18.427556   15272 command_runner.go:130] >         ready
	I0109 00:28:18.427556   15272 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0109 00:28:18.427556   15272 command_runner.go:130] >            pods insecure
	I0109 00:28:18.427556   15272 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0109 00:28:18.427556   15272 command_runner.go:130] >            ttl 30
	I0109 00:28:18.427627   15272 command_runner.go:130] >         }
	I0109 00:28:18.427627   15272 command_runner.go:130] >         prometheus :9153
	I0109 00:28:18.427627   15272 command_runner.go:130] >         hosts {
	I0109 00:28:18.427627   15272 command_runner.go:130] >            172.24.96.1 host.minikube.internal
	I0109 00:28:18.427627   15272 command_runner.go:130] >            fallthrough
	I0109 00:28:18.427627   15272 command_runner.go:130] >         }
	I0109 00:28:18.427627   15272 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0109 00:28:18.427695   15272 command_runner.go:130] >            max_concurrent 1000
	I0109 00:28:18.427695   15272 command_runner.go:130] >         }
	I0109 00:28:18.427695   15272 command_runner.go:130] >         cache 30
	I0109 00:28:18.427695   15272 command_runner.go:130] >         loop
	I0109 00:28:18.427695   15272 command_runner.go:130] >         reload
	I0109 00:28:18.427695   15272 command_runner.go:130] >         loadbalance
	I0109 00:28:18.427695   15272 command_runner.go:130] >     }
	I0109 00:28:18.427766   15272 command_runner.go:130] > kind: ConfigMap
	I0109 00:28:18.427766   15272 command_runner.go:130] > metadata:
	I0109 00:28:18.427766   15272 command_runner.go:130] >   creationTimestamp: "2024-01-09T00:05:31Z"
	I0109 00:28:18.427766   15272 command_runner.go:130] >   name: coredns
	I0109 00:28:18.427766   15272 command_runner.go:130] >   namespace: kube-system
	I0109 00:28:18.427836   15272 command_runner.go:130] >   resourceVersion: "362"
	I0109 00:28:18.427836   15272 command_runner.go:130] >   uid: 3f96b20d-2896-4a3f-95df-633f61fcd852
	I0109 00:28:18.434124   15272 node_ready.go:35] waiting up to 6m0s for node "multinode-173500" to be "Ready" ...
	I0109 00:28:18.434769   15272 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0109 00:28:18.455715   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:18.455715   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:18.455715   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:18.455789   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:18.459323   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:28:18.459323   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:18.459323   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:18 GMT
	I0109 00:28:18.459323   15272 round_trippers.go:580]     Audit-Id: 993219df-cf52-422d-a584-f4b15030d824
	I0109 00:28:18.459323   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:18.459323   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:18.459323   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:18.459323   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:18.459628   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:18.937326   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:18.937326   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:18.937326   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:18.937326   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:18.941941   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:18.941941   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:18.941941   15272 round_trippers.go:580]     Audit-Id: 7b40a18a-81b0-4bee-b73e-1ef7a6289414
	I0109 00:28:18.942395   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:18.942395   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:18.942395   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:18.942550   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:18.942550   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:18 GMT
	I0109 00:28:18.942787   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:19.444674   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:19.444803   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:19.444803   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:19.444803   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:19.449415   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:19.449415   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:19.449415   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:19.449415   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:19.449415   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:19 GMT
	I0109 00:28:19.449415   15272 round_trippers.go:580]     Audit-Id: 3d427490-3bc0-4acf-bf6a-349b4d6425df
	I0109 00:28:19.449415   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:19.449415   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:19.450370   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:19.950100   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:19.950232   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:19.950232   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:19.950232   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:19.955871   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:28:19.955871   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:19.955871   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:19.956867   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:19.956900   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:19.956900   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:19.956900   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:19 GMT
	I0109 00:28:19.956900   15272 round_trippers.go:580]     Audit-Id: be011850-b7bb-4947-a210-b1d1985be30b
	I0109 00:28:19.959553   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:20.438398   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:20.438549   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:20.438549   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:20.438549   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:20.442955   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:20.442955   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:20.442955   15272 round_trippers.go:580]     Audit-Id: d83ca35f-9b0f-4e6f-a5a8-b97ac628cac5
	I0109 00:28:20.442955   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:20.442955   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:20.442955   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:20.442955   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:20.442955   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:20 GMT
	I0109 00:28:20.444406   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:20.444619   15272 node_ready.go:58] node "multinode-173500" has status "Ready":"False"
	I0109 00:28:20.938040   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:20.938142   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:20.938142   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:20.938142   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:20.942470   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:20.942470   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:20.942470   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:20.942623   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:20.942623   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:20 GMT
	I0109 00:28:20.942623   15272 round_trippers.go:580]     Audit-Id: 51874a23-d834-456d-bf78-bab7d0128779
	I0109 00:28:20.942623   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:20.942623   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:20.943064   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:21.438969   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:21.438969   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:21.438969   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:21.438969   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:21.446847   15272 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0109 00:28:21.447675   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:21.447675   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:21.447675   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:21.447675   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:21.447675   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:21.447675   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:21 GMT
	I0109 00:28:21.447675   15272 round_trippers.go:580]     Audit-Id: 888aa3bf-e6f3-4864-88c4-ad186d6d66fc
	I0109 00:28:21.447833   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:21.940252   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:21.940407   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:21.940407   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:21.940519   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:21.944355   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:21.944355   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:21.944355   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:21 GMT
	I0109 00:28:21.944355   15272 round_trippers.go:580]     Audit-Id: 0cb3f857-7e75-466f-9380-6f4884561a0b
	I0109 00:28:21.944355   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:21.945041   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:21.945041   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:21.945041   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:21.945485   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:22.443373   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:22.443485   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:22.443485   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:22.443485   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:22.447428   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:22.447428   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:22.448011   15272 round_trippers.go:580]     Audit-Id: 61b1f586-d1e0-40c7-8891-0f7bf701e3dc
	I0109 00:28:22.448011   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:22.448011   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:22.448011   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:22.448011   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:22.448011   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:22 GMT
	I0109 00:28:22.448620   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:22.449796   15272 node_ready.go:58] node "multinode-173500" has status "Ready":"False"
	I0109 00:28:22.940679   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:22.940679   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:22.940679   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:22.940679   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:22.945209   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:22.945209   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:22.945209   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:22.946212   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:22 GMT
	I0109 00:28:22.946212   15272 round_trippers.go:580]     Audit-Id: 80f314d1-eac9-4531-baca-ba564796fb43
	I0109 00:28:22.946212   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:22.946212   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:22.946260   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:22.946575   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:23.439325   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:23.439325   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:23.439325   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:23.439325   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:23.447586   15272 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0109 00:28:23.447586   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:23.447586   15272 round_trippers.go:580]     Audit-Id: 4a9938cd-4058-40f5-83ca-d2da6d897915
	I0109 00:28:23.447586   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:23.447586   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:23.447586   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:23.447586   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:23.447586   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:23 GMT
	I0109 00:28:23.447586   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:23.940391   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:23.940391   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:23.940476   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:23.940476   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:23.944725   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:23.944725   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:23.944725   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:23.944725   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:23.944725   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:23 GMT
	I0109 00:28:23.944725   15272 round_trippers.go:580]     Audit-Id: dc88aff8-8b2b-4554-bc45-2d149ef195db
	I0109 00:28:23.944725   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:23.944725   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:23.945347   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:24.445978   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:24.446272   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:24.446272   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:24.446272   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:24.450749   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:24.450749   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:24.450749   15272 round_trippers.go:580]     Audit-Id: 85e80c24-7594-487c-b5fa-7f5a82af18da
	I0109 00:28:24.450749   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:24.450749   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:24.451206   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:24.451206   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:24.451206   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:24 GMT
	I0109 00:28:24.452060   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:24.452628   15272 node_ready.go:58] node "multinode-173500" has status "Ready":"False"
	I0109 00:28:24.948798   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:24.948798   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:24.948798   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:24.948798   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:24.952997   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:24.952997   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:24.952997   15272 round_trippers.go:580]     Audit-Id: 7bdc33c5-d0bd-4d6d-80be-61af823dcace
	I0109 00:28:24.952997   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:24.952997   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:24.953540   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:24.953593   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:24.953593   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:24 GMT
	I0109 00:28:24.953664   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1661","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0109 00:28:25.438423   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:25.438423   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:25.438423   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:25.438423   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:25.442461   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:25.442461   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:25.442461   15272 round_trippers.go:580]     Audit-Id: f41201be-abf8-4697-b0f1-d6f9775f4f69
	I0109 00:28:25.443058   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:25.443058   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:25.443058   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:25.443058   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:25.443058   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:25 GMT
	I0109 00:28:25.443153   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:25.939965   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:25.939965   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:25.939965   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:25.939965   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:25.944529   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:25.944529   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:25.944529   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:25.944665   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:25.944665   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:25.944665   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:25.944665   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:25 GMT
	I0109 00:28:25.944665   15272 round_trippers.go:580]     Audit-Id: 6d2f64c1-5f0a-4b9f-affb-4b6fd2e9278e
	I0109 00:28:25.944665   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:26.445289   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:26.445289   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:26.445289   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:26.445289   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:26.449699   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:26.449699   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:26.449699   15272 round_trippers.go:580]     Audit-Id: 276a8a9a-bb1e-4334-822e-feeacfb7d57a
	I0109 00:28:26.450180   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:26.450180   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:26.450180   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:26.450180   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:26.450180   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:26 GMT
	I0109 00:28:26.450491   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:26.944803   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:26.944803   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:26.944803   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:26.944803   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:26.949218   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:26.949407   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:26.949407   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:26.949407   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:26 GMT
	I0109 00:28:26.949407   15272 round_trippers.go:580]     Audit-Id: 9f85f2ee-b659-4b1d-a006-ecf1362e5609
	I0109 00:28:26.949407   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:26.949407   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:26.949407   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:26.950018   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:26.950169   15272 node_ready.go:58] node "multinode-173500" has status "Ready":"False"
	I0109 00:28:27.441714   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:27.441805   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:27.441805   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:27.441805   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:27.448169   15272 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0109 00:28:27.448169   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:27.448169   15272 round_trippers.go:580]     Audit-Id: c31f3323-cd2d-422f-a169-538601cd9316
	I0109 00:28:27.448169   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:27.448169   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:27.448169   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:27.448169   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:27.448169   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:27 GMT
	I0109 00:28:27.448707   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:27.943254   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:27.943317   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:27.943362   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:27.943362   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:27.947378   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:27.947731   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:27.947731   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:27.947731   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:27 GMT
	I0109 00:28:27.947731   15272 round_trippers.go:580]     Audit-Id: 02bb5ede-3c3f-4378-b790-30ee6d60f184
	I0109 00:28:27.947731   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:27.947731   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:27.947731   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:27.947871   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:28.435602   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:28.435659   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:28.435659   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:28.435659   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:28.444082   15272 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0109 00:28:28.444082   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:28.444082   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:28 GMT
	I0109 00:28:28.444082   15272 round_trippers.go:580]     Audit-Id: 4fe884b3-4771-4f9c-8af5-da9c6d6f27cc
	I0109 00:28:28.444082   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:28.444082   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:28.444082   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:28.444082   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:28.444754   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:28.943145   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:28.943145   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:28.943204   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:28.943204   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:28.947621   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:28.947621   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:28.947621   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:28 GMT
	I0109 00:28:28.948601   15272 round_trippers.go:580]     Audit-Id: d0e2d802-2c23-48e4-a080-76c5690afc3e
	I0109 00:28:28.948601   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:28.948601   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:28.948601   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:28.948601   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:28.949450   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:29.449610   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:29.449610   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:29.449610   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:29.449610   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:29.455250   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:28:29.455250   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:29.455250   15272 round_trippers.go:580]     Audit-Id: ac51c96d-a389-4380-ab4f-8c9105b17b05
	I0109 00:28:29.455250   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:29.455250   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:29.455250   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:29.455250   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:29.455250   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:29 GMT
	I0109 00:28:29.455250   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:29.455250   15272 node_ready.go:58] node "multinode-173500" has status "Ready":"False"
	I0109 00:28:29.947563   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:29.947563   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:29.947645   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:29.947645   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:29.952003   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:29.952659   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:29.952659   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:29.952659   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:29.952659   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:29.952659   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:29.952659   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:29 GMT
	I0109 00:28:29.952659   15272 round_trippers.go:580]     Audit-Id: 16fae90c-6723-457e-9418-996682856d23
	I0109 00:28:29.953004   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:30.447379   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:30.447438   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:30.447438   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:30.447516   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:30.475742   15272 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0109 00:28:30.475742   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:30.475742   15272 round_trippers.go:580]     Audit-Id: c01c8ea9-1731-4838-86d3-cb2b5fad6784
	I0109 00:28:30.475742   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:30.476369   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:30.476369   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:30.476369   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:30.476369   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:30 GMT
	I0109 00:28:30.477092   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:30.948209   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:30.948293   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:30.948293   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:30.948293   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:30.951709   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:30.951709   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:30.951709   15272 round_trippers.go:580]     Audit-Id: 412f08bd-5073-4475-a6c5-f40cb7dca553
	I0109 00:28:30.951709   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:30.951709   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:30.951709   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:30.951966   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:30.951966   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:30 GMT
	I0109 00:28:30.952297   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:31.444615   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:31.444615   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:31.444615   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:31.444615   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:31.449211   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:31.449211   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:31.449211   15272 round_trippers.go:580]     Audit-Id: 71fc9b03-98e3-4a9e-b0be-3c56a176fdb5
	I0109 00:28:31.449211   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:31.449211   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:31.449211   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:31.449211   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:31.449211   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:31 GMT
	I0109 00:28:31.450934   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:31.947835   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:31.947923   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:31.947923   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:31.947923   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:31.958655   15272 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0109 00:28:31.958655   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:31.958655   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:31.958655   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:31.958655   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:31.958655   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:31.958655   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:31 GMT
	I0109 00:28:31.958655   15272 round_trippers.go:580]     Audit-Id: 650cca2f-066e-4256-bf0e-c72adfe38b4a
	I0109 00:28:31.960067   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:31.960629   15272 node_ready.go:58] node "multinode-173500" has status "Ready":"False"
	I0109 00:28:32.450648   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:32.450718   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:32.450718   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:32.450718   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:32.460134   15272 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0109 00:28:32.460134   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:32.460134   15272 round_trippers.go:580]     Audit-Id: 11a586e0-6826-4d35-8528-c8df0a94f1e6
	I0109 00:28:32.460134   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:32.460134   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:32.460134   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:32.460134   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:32.460134   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:32 GMT
	I0109 00:28:32.460729   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1789","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0109 00:28:32.948451   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:32.948451   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:32.948451   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:32.948451   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:32.956122   15272 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0109 00:28:32.956316   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:32.956316   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:32.956316   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:32.956316   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:32.956316   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:32.956316   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:32 GMT
	I0109 00:28:32.956316   15272 round_trippers.go:580]     Audit-Id: b33f1d04-e83a-41e6-ae90-5ab14d9a8437
	I0109 00:28:32.956659   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1823","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0109 00:28:32.957064   15272 node_ready.go:49] node "multinode-173500" has status "Ready":"True"
	I0109 00:28:32.957178   15272 node_ready.go:38] duration metric: took 14.5230535s waiting for node "multinode-173500" to be "Ready" ...
	I0109 00:28:32.957178   15272 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:28:32.957328   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods
	I0109 00:28:32.957388   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:32.957388   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:32.957388   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:32.967540   15272 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0109 00:28:32.967540   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:32.967540   15272 round_trippers.go:580]     Audit-Id: 386416e1-5b2c-49af-b161-98df9f2ed30f
	I0109 00:28:32.967540   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:32.967540   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:32.967540   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:32.967540   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:32.967540   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:32 GMT
	I0109 00:28:32.970209   15272 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1825"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"1809","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83455 chars]
	I0109 00:28:32.974968   15272 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:32.975010   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bkss9
	I0109 00:28:32.975010   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:32.975010   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:32.975010   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:32.978213   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:32.978213   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:32.978213   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:32.978213   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:32.978213   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:32.978213   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:32.978213   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:32 GMT
	I0109 00:28:32.978213   15272 round_trippers.go:580]     Audit-Id: 63773b39-d89e-48f7-9d91-fc1946268c10
	I0109 00:28:32.979555   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"1809","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6494 chars]
	I0109 00:28:32.980366   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:32.980366   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:32.980366   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:32.980366   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:32.983263   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:28:32.983263   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:32.983263   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:32.983263   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:32.983263   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:32 GMT
	I0109 00:28:32.983263   15272 round_trippers.go:580]     Audit-Id: 3d55f3ba-7468-4cad-a784-b6076c410de4
	I0109 00:28:32.983263   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:32.983263   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:32.984469   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1823","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0109 00:28:32.984865   15272 pod_ready.go:92] pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace has status "Ready":"True"
	I0109 00:28:32.984865   15272 pod_ready.go:81] duration metric: took 9.8547ms waiting for pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:32.984865   15272 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:32.984943   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-173500
	I0109 00:28:32.984943   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:32.984943   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:32.984943   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:32.987713   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:28:32.987713   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:32.987713   15272 round_trippers.go:580]     Audit-Id: a5ad06e6-0cba-49a8-8a91-9a6ab9c38a7f
	I0109 00:28:32.987713   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:32.987713   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:32.987713   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:32.987713   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:32.987713   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:32 GMT
	I0109 00:28:32.988926   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-173500","namespace":"kube-system","uid":"43da51b9-2249-4c4d-a9c0-4c899270d870","resourceVersion":"1777","creationTimestamp":"2024-01-09T00:28:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.24.109.120:2379","kubernetes.io/config.hash":"d21425b7f4d2774c35dc812132e81582","kubernetes.io/config.mirror":"d21425b7f4d2774c35dc812132e81582","kubernetes.io/config.seen":"2024-01-09T00:28:04.947418401Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5873 chars]
	I0109 00:28:32.989532   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:32.989643   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:32.989643   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:32.989643   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:32.995986   15272 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0109 00:28:32.995986   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:32.995986   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:32.995986   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:32.995986   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:32.995986   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:32.995986   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:32 GMT
	I0109 00:28:32.995986   15272 round_trippers.go:580]     Audit-Id: cdcdf480-0f21-4591-9644-06c21adc87bd
	I0109 00:28:32.996943   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1823","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0109 00:28:32.997223   15272 pod_ready.go:92] pod "etcd-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:28:32.997223   15272 pod_ready.go:81] duration metric: took 12.3584ms waiting for pod "etcd-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:32.997223   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:32.997223   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-173500
	I0109 00:28:32.997223   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:32.997223   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:32.997223   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:33.002861   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:28:33.002861   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:33.002861   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:33.002861   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:33.003620   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:33.003620   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:33.003620   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:33 GMT
	I0109 00:28:33.003620   15272 round_trippers.go:580]     Audit-Id: feba445f-904d-4abd-8653-3b628208b67c
	I0109 00:28:33.003843   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-173500","namespace":"kube-system","uid":"5c089ac2-fe84-48d8-9727-a932903b646d","resourceVersion":"1772","creationTimestamp":"2024-01-09T00:28:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.24.109.120:8443","kubernetes.io/config.hash":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.mirror":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.seen":"2024-01-09T00:28:04.947424101Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7634 chars]
	I0109 00:28:33.004329   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:33.004394   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:33.004394   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:33.004394   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:33.008357   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:33.008357   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:33.008357   15272 round_trippers.go:580]     Audit-Id: d4228a35-8084-41ca-ba1a-c9a5930fb54d
	I0109 00:28:33.008357   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:33.008357   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:33.008357   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:33.008357   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:33.008357   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:33 GMT
	I0109 00:28:33.009104   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1823","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0109 00:28:33.509736   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-173500
	I0109 00:28:33.509825   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:33.509825   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:33.509825   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:33.515815   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:28:33.515815   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:33.515815   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:33.515815   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:33.515815   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:33 GMT
	I0109 00:28:33.515815   15272 round_trippers.go:580]     Audit-Id: 25a23ccd-0361-47c1-8007-8af2ed647b06
	I0109 00:28:33.515815   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:33.515815   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:33.516599   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-173500","namespace":"kube-system","uid":"5c089ac2-fe84-48d8-9727-a932903b646d","resourceVersion":"1772","creationTimestamp":"2024-01-09T00:28:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.24.109.120:8443","kubernetes.io/config.hash":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.mirror":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.seen":"2024-01-09T00:28:04.947424101Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7634 chars]
	I0109 00:28:33.517403   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:33.517434   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:33.517434   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:33.517434   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:33.521082   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:33.521082   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:33.521082   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:33.521082   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:33.521082   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:33 GMT
	I0109 00:28:33.521082   15272 round_trippers.go:580]     Audit-Id: 6defc422-7924-4c93-b23a-cef309b3eba3
	I0109 00:28:33.521082   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:33.521234   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:33.521590   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1823","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0109 00:28:34.009836   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-173500
	I0109 00:28:34.009904   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:34.009904   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:34.009904   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:34.014654   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:34.014654   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:34.014654   15272 round_trippers.go:580]     Audit-Id: b8177ed8-f355-4051-a971-065f1c9e59d9
	I0109 00:28:34.014778   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:34.014778   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:34.014778   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:34.014778   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:34.014778   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:34 GMT
	I0109 00:28:34.015523   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-173500","namespace":"kube-system","uid":"5c089ac2-fe84-48d8-9727-a932903b646d","resourceVersion":"1772","creationTimestamp":"2024-01-09T00:28:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.24.109.120:8443","kubernetes.io/config.hash":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.mirror":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.seen":"2024-01-09T00:28:04.947424101Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7634 chars]
	I0109 00:28:34.016210   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:34.016210   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:34.016313   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:34.016313   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:34.023363   15272 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0109 00:28:34.023363   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:34.023363   15272 round_trippers.go:580]     Audit-Id: 93c82f16-ed49-487c-8807-adacebc02d75
	I0109 00:28:34.023363   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:34.023363   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:34.023363   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:34.023363   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:34.023363   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:34 GMT
	I0109 00:28:34.024204   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1823","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0109 00:28:34.500323   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-173500
	I0109 00:28:34.500447   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:34.500447   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:34.500447   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:34.504839   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:34.504839   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:34.504839   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:34.504839   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:34.504839   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:34.504839   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:34 GMT
	I0109 00:28:34.504839   15272 round_trippers.go:580]     Audit-Id: 47cdd00f-b341-4de9-8e29-54e25b448a67
	I0109 00:28:34.504839   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:34.505466   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-173500","namespace":"kube-system","uid":"5c089ac2-fe84-48d8-9727-a932903b646d","resourceVersion":"1772","creationTimestamp":"2024-01-09T00:28:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.24.109.120:8443","kubernetes.io/config.hash":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.mirror":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.seen":"2024-01-09T00:28:04.947424101Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7634 chars]
	I0109 00:28:34.506785   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:34.506897   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:34.506897   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:34.506897   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:34.510334   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:34.510334   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:34.510334   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:34.510334   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:34.510842   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:34.510842   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:34 GMT
	I0109 00:28:34.510842   15272 round_trippers.go:580]     Audit-Id: 05af92c8-e192-42db-97f5-8fc43561f6f8
	I0109 00:28:34.510842   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:34.511016   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1823","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0109 00:28:34.999986   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-173500
	I0109 00:28:34.999986   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:34.999986   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:34.999986   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:35.009547   15272 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0109 00:28:35.009547   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:35.009547   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:35.009547   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:35.009547   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:35.009547   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:35.009547   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:35 GMT
	I0109 00:28:35.009547   15272 round_trippers.go:580]     Audit-Id: db3fc8ee-fa20-4e37-bd35-f18567e12cf3
	I0109 00:28:35.009547   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-173500","namespace":"kube-system","uid":"5c089ac2-fe84-48d8-9727-a932903b646d","resourceVersion":"1772","creationTimestamp":"2024-01-09T00:28:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.24.109.120:8443","kubernetes.io/config.hash":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.mirror":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.seen":"2024-01-09T00:28:04.947424101Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7634 chars]
	I0109 00:28:35.010804   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:35.010909   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:35.010909   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:35.010909   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:35.013093   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:28:35.013093   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:35.013093   15272 round_trippers.go:580]     Audit-Id: c84aeecf-18cb-4aa2-a72c-2866076fbee2
	I0109 00:28:35.013093   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:35.013093   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:35.013093   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:35.014039   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:35.014039   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:35 GMT
	I0109 00:28:35.014247   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1823","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0109 00:28:35.014834   15272 pod_ready.go:102] pod "kube-apiserver-multinode-173500" in "kube-system" namespace has status "Ready":"False"
	I0109 00:28:35.503632   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-173500
	I0109 00:28:35.503696   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:35.503696   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:35.503696   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:35.508673   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:35.508673   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:35.508673   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:35 GMT
	I0109 00:28:35.508673   15272 round_trippers.go:580]     Audit-Id: 5b08a41a-adb8-474c-9fbe-e379efe9a53b
	I0109 00:28:35.508673   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:35.508673   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:35.508673   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:35.508673   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:35.510074   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-173500","namespace":"kube-system","uid":"5c089ac2-fe84-48d8-9727-a932903b646d","resourceVersion":"1830","creationTimestamp":"2024-01-09T00:28:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.24.109.120:8443","kubernetes.io/config.hash":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.mirror":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.seen":"2024-01-09T00:28:04.947424101Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7409 chars]
	I0109 00:28:35.511088   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:35.511193   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:35.511193   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:35.511193   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:35.517013   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:28:35.517013   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:35.517013   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:35.517013   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:35.517013   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:35.517013   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:35 GMT
	I0109 00:28:35.517013   15272 round_trippers.go:580]     Audit-Id: ce198a41-28cb-411a-bdfc-43e56b605b88
	I0109 00:28:35.517013   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:35.517013   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0109 00:28:35.518114   15272 pod_ready.go:92] pod "kube-apiserver-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:28:35.518183   15272 pod_ready.go:81] duration metric: took 2.5209594s waiting for pod "kube-apiserver-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:35.518183   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:35.518183   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-173500
	I0109 00:28:35.518183   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:35.518183   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:35.518183   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:35.522588   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:35.522588   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:35.522588   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:35 GMT
	I0109 00:28:35.522588   15272 round_trippers.go:580]     Audit-Id: c146cadb-6f80-45c4-b928-2e0bb62c3454
	I0109 00:28:35.522588   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:35.522588   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:35.522588   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:35.522588   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:35.522588   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-173500","namespace":"kube-system","uid":"a0252ea5-5d6a-4303-b7e6-151481d4cd8a","resourceVersion":"1796","creationTimestamp":"2024-01-09T00:05:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f6b180d5a2686dc98b0355b6df7f53ea","kubernetes.io/config.mirror":"f6b180d5a2686dc98b0355b6df7f53ea","kubernetes.io/config.seen":"2024-01-09T00:05:31.606504770Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7179 chars]
	I0109 00:28:35.523494   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:35.523494   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:35.523494   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:35.523494   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:35.526659   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:35.527435   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:35.527435   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:35.527482   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:35.527482   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:35 GMT
	I0109 00:28:35.527482   15272 round_trippers.go:580]     Audit-Id: 4df09479-b61c-4ed1-aef5-4f241d618ada
	I0109 00:28:35.527482   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:35.527482   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:35.527734   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0109 00:28:35.527980   15272 pod_ready.go:92] pod "kube-controller-manager-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:28:35.527980   15272 pod_ready.go:81] duration metric: took 9.797ms waiting for pod "kube-controller-manager-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:35.527980   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4h4sv" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:35.527980   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4h4sv
	I0109 00:28:35.527980   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:35.527980   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:35.527980   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:35.530631   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:28:35.530631   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:35.530631   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:35.530631   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:35.530631   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:35.530631   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:35 GMT
	I0109 00:28:35.530631   15272 round_trippers.go:580]     Audit-Id: 2ef6fc69-d55a-4042-9c3b-bb9a844bb9b7
	I0109 00:28:35.530631   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:35.532298   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4h4sv","generateName":"kube-proxy-","namespace":"kube-system","uid":"a45861ba-73e0-452f-a535-f66e154ea1c6","resourceVersion":"592","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec2434b9-012c-4df1-b401-04556fed7700","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec2434b9-012c-4df1-b401-04556fed7700\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0109 00:28:35.533065   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:28:35.533143   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:35.533143   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:35.533143   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:35.536386   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:35.536459   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:35.536459   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:35 GMT
	I0109 00:28:35.536459   15272 round_trippers.go:580]     Audit-Id: c715e16c-4a05-4059-a894-9864b3c9a04a
	I0109 00:28:35.536459   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:35.536459   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:35.536459   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:35.536459   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:35.536876   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0","resourceVersion":"1573","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_23_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager": [truncated 3819 chars]
	I0109 00:28:35.537327   15272 pod_ready.go:92] pod "kube-proxy-4h4sv" in "kube-system" namespace has status "Ready":"True"
	I0109 00:28:35.537351   15272 pod_ready.go:81] duration metric: took 9.3715ms waiting for pod "kube-proxy-4h4sv" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:35.537351   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mj6ks" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:35.548918   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mj6ks
	I0109 00:28:35.549040   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:35.549040   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:35.549040   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:35.551265   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:28:35.551265   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:35.551265   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:35.551265   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:35.551265   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:35 GMT
	I0109 00:28:35.551265   15272 round_trippers.go:580]     Audit-Id: 2d767a85-5d43-485d-9db8-4a34b2fc44af
	I0109 00:28:35.552087   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:35.552087   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:35.552473   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mj6ks","generateName":"kube-proxy-","namespace":"kube-system","uid":"bd23c4c8-d363-4a3f-b750-a3de2346a3bb","resourceVersion":"1587","creationTimestamp":"2024-01-09T00:13:24Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec2434b9-012c-4df1-b401-04556fed7700","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:13:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec2434b9-012c-4df1-b401-04556fed7700\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0109 00:28:35.751810   15272 request.go:629] Waited for 199.337ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m03
	I0109 00:28:35.751810   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m03
	I0109 00:28:35.751810   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:35.751810   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:35.751810   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:35.756599   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:35.756599   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:35.756599   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:35 GMT
	I0109 00:28:35.756599   15272 round_trippers.go:580]     Audit-Id: 059c2e17-8259-49a9-9759-c2d966f467df
	I0109 00:28:35.756599   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:35.757485   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:35.757485   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:35.757485   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:35.757801   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m03","uid":"9d8a783a-d01b-498d-94ae-1e3f65e7667c","resourceVersion":"1603","creationTimestamp":"2024-01-09T00:23:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_23_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:23:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3636 chars]
	I0109 00:28:35.759038   15272 pod_ready.go:92] pod "kube-proxy-mj6ks" in "kube-system" namespace has status "Ready":"True"
	I0109 00:28:35.759161   15272 pod_ready.go:81] duration metric: took 221.8096ms waiting for pod "kube-proxy-mj6ks" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:35.759161   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qrtm6" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:35.954348   15272 request.go:629] Waited for 194.8019ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qrtm6
	I0109 00:28:35.954572   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qrtm6
	I0109 00:28:35.954572   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:35.954572   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:35.954572   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:35.960085   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:28:35.960085   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:35.960085   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:35.960085   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:35.960085   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:35 GMT
	I0109 00:28:35.960566   15272 round_trippers.go:580]     Audit-Id: 4506f8da-372f-45e2-9215-bbaddb1a4674
	I0109 00:28:35.960566   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:35.960566   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:35.961953   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qrtm6","generateName":"kube-proxy-","namespace":"kube-system","uid":"37d066e0-6ff3-4f22-abc3-6bddfa64736e","resourceVersion":"1833","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec2434b9-012c-4df1-b401-04556fed7700","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec2434b9-012c-4df1-b401-04556fed7700\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0109 00:28:36.156805   15272 request.go:629] Waited for 194.0074ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:36.157065   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:36.157110   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:36.157110   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:36.157110   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:36.160696   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:36.160696   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:36.160696   15272 round_trippers.go:580]     Audit-Id: 6f9c05dd-484b-4ef2-b456-ad817c8443f1
	I0109 00:28:36.161214   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:36.161214   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:36.161214   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:36.161214   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:36.161214   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:36 GMT
	I0109 00:28:36.161487   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0109 00:28:36.162068   15272 pod_ready.go:92] pod "kube-proxy-qrtm6" in "kube-system" namespace has status "Ready":"True"
	I0109 00:28:36.162068   15272 pod_ready.go:81] duration metric: took 402.9067ms waiting for pod "kube-proxy-qrtm6" in "kube-system" namespace to be "Ready" ...
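The repeated "Waited for … due to client-side throttling, not priority and fairness" lines above and below come from client-go's local rate limiter, not from the API server. A minimal sketch of where that limiter lives in ordinary client-go usage (illustrative QPS/Burst values; this is not minikube's actual client construction):

    package sketch

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newClient builds a clientset whose requests are delayed locally once they
    // exceed the QPS/Burst budget, which is what produces the
    // "Waited for ... due to client-side throttling" log lines.
    func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 5    // steady-state requests per second (illustrative)
        cfg.Burst = 10 // short bursts allowed above QPS (illustrative)
        return kubernetes.NewForConfig(cfg)
    }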
	I0109 00:28:36.162139   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:36.359841   15272 request.go:629] Waited for 197.3842ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-173500
	I0109 00:28:36.360238   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-173500
	I0109 00:28:36.360303   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:36.360303   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:36.360303   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:36.364016   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:28:36.364016   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:36.364016   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:36.364016   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:36.364016   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:36.364016   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:36.364381   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:36 GMT
	I0109 00:28:36.364381   15272 round_trippers.go:580]     Audit-Id: 586f6628-bb6d-4d11-a63b-4659061bb668
	I0109 00:28:36.364889   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-173500","namespace":"kube-system","uid":"31d8cdf6-292f-4b3c-87c5-951fc34d0ea4","resourceVersion":"1829","creationTimestamp":"2024-01-09T00:05:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70306498a200a6bbe0aa0b41e240d63b","kubernetes.io/config.mirror":"70306498a200a6bbe0aa0b41e240d63b","kubernetes.io/config.seen":"2024-01-09T00:05:21.481168866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4909 chars]
	I0109 00:28:36.548919   15272 request.go:629] Waited for 183.8335ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:36.549255   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:28:36.549255   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:36.549316   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:36.549316   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:36.553690   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:36.554663   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:36.554721   15272 round_trippers.go:580]     Audit-Id: 45257db1-1c70-4e68-90d1-5911917c411d
	I0109 00:28:36.554721   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:36.554721   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:36.554721   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:36.554721   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:36.554721   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:36 GMT
	I0109 00:28:36.555843   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0109 00:28:36.556583   15272 pod_ready.go:92] pod "kube-scheduler-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:28:36.556703   15272 pod_ready.go:81] duration metric: took 394.5645ms waiting for pod "kube-scheduler-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:28:36.556703   15272 pod_ready.go:38] duration metric: took 3.5995245s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
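Each of the pod_ready waits above is, in essence, polling a Pod object and checking its PodReady condition. A minimal client-go sketch of that check (hypothetical helper, assuming an already-constructed clientset; not the actual pod_ready.go implementation):

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podIsReady fetches a pod and reports whether its PodReady condition is True,
    // which is what the `has status "Ready":"True"` log lines above reflect.
    func podIsReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }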
	I0109 00:28:36.556811   15272 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:28:36.572050   15272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:28:36.590608   15272 command_runner.go:130] > 1838
	I0109 00:28:36.591438   15272 api_server.go:72] duration metric: took 18.2753157s to wait for apiserver process to appear ...
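The "waiting for apiserver process to appear" step simply runs `sudo pgrep -xnf kube-apiserver.*minikube.*` on the node over SSH and treats any PID on stdout (here 1838) as success. A rough sketch of such a remote check with golang.org/x/crypto/ssh (hypothetical host, user, and key; not minikube's ssh_runner):

    package sketch

    import (
        "strings"

        "golang.org/x/crypto/ssh"
    )

    // apiserverRunning runs pgrep on the node and reports whether a PID came back.
    func apiserverRunning(addr string, signer ssh.Signer) (bool, error) {
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return false, err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return false, err
        }
        defer sess.Close()
        out, err := sess.Output("sudo pgrep -xnf kube-apiserver.*minikube.*")
        if err != nil {
            return false, err // pgrep exits non-zero when nothing matches
        }
        return strings.TrimSpace(string(out)) != "", nil
    }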
	I0109 00:28:36.591438   15272 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:28:36.591438   15272 api_server.go:253] Checking apiserver healthz at https://172.24.109.120:8443/healthz ...
	I0109 00:28:36.600224   15272 api_server.go:279] https://172.24.109.120:8443/healthz returned 200:
	ok
	I0109 00:28:36.600463   15272 round_trippers.go:463] GET https://172.24.109.120:8443/version
	I0109 00:28:36.600463   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:36.600463   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:36.600463   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:36.601664   15272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0109 00:28:36.601664   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:36.601664   15272 round_trippers.go:580]     Content-Length: 264
	I0109 00:28:36.601664   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:36 GMT
	I0109 00:28:36.601664   15272 round_trippers.go:580]     Audit-Id: 800ac1b8-3469-4a2e-a908-456fbf37c4a4
	I0109 00:28:36.601664   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:36.602449   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:36.602449   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:36.602449   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:36.602449   15272 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0109 00:28:36.602546   15272 api_server.go:141] control plane version: v1.28.4
	I0109 00:28:36.602546   15272 api_server.go:131] duration metric: took 11.1086ms to wait for apiserver health ...
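The health check above is two plain HTTPS GETs against the apiserver: /healthz must return 200 with body "ok", and /version is decoded to read the control-plane version (v1.28.4 here). A minimal sketch, assuming an *http.Client already configured with the cluster CA and client certificates (building that TLS config is omitted):

    package sketch

    import (
        "encoding/json"
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
    }

    // checkAPIServer verifies /healthz and returns the gitVersion from /version.
    func checkAPIServer(c *http.Client, base string) (string, error) {
        resp, err := c.Get(base + "/healthz")
        if err != nil {
            return "", err
        }
        body, _ := io.ReadAll(resp.Body)
        resp.Body.Close()
        if resp.StatusCode != http.StatusOK || strings.TrimSpace(string(body)) != "ok" {
            return "", fmt.Errorf("healthz: %d %q", resp.StatusCode, body)
        }
        resp, err = c.Get(base + "/version")
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        var v versionInfo
        if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
            return "", err
        }
        return v.GitVersion, nil // e.g. "v1.28.4" as logged above
    }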
	I0109 00:28:36.602546   15272 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:28:36.751421   15272 request.go:629] Waited for 148.7771ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods
	I0109 00:28:36.751669   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods
	I0109 00:28:36.751669   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:36.751669   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:36.751669   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:36.761118   15272 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0109 00:28:36.761118   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:36.761118   15272 round_trippers.go:580]     Audit-Id: 86da45dc-ef91-4477-b8c4-d278cda81392
	I0109 00:28:36.761118   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:36.761118   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:36.761118   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:36.761118   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:36.761118   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:36 GMT
	I0109 00:28:36.763826   15272 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1838"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"1809","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82555 chars]
	I0109 00:28:36.768039   15272 system_pods.go:59] 12 kube-system pods found
	I0109 00:28:36.768039   15272 system_pods.go:61] "coredns-5dd5756b68-bkss9" [463fb6c6-1e85-419f-9c13-96e58a2ec22e] Running
	I0109 00:28:36.768099   15272 system_pods.go:61] "etcd-multinode-173500" [43da51b9-2249-4c4d-a9c0-4c899270d870] Running
	I0109 00:28:36.768099   15272 system_pods.go:61] "kindnet-6nz87" [73ad6ec4-cbfb-4b93-888c-3d430f3c7bf2] Running
	I0109 00:28:36.768099   15272 system_pods.go:61] "kindnet-ht547" [711faf1a-9f77-487c-bd84-1e227ab9c51a] Running
	I0109 00:28:36.768099   15272 system_pods.go:61] "kindnet-t72cs" [63893803-de87-4df9-ac98-3772bd46603c] Running
	I0109 00:28:36.768099   15272 system_pods.go:61] "kube-apiserver-multinode-173500" [5c089ac2-fe84-48d8-9727-a932903b646d] Running
	I0109 00:28:36.768099   15272 system_pods.go:61] "kube-controller-manager-multinode-173500" [a0252ea5-5d6a-4303-b7e6-151481d4cd8a] Running
	I0109 00:28:36.768151   15272 system_pods.go:61] "kube-proxy-4h4sv" [a45861ba-73e0-452f-a535-f66e154ea1c6] Running
	I0109 00:28:36.768151   15272 system_pods.go:61] "kube-proxy-mj6ks" [bd23c4c8-d363-4a3f-b750-a3de2346a3bb] Running
	I0109 00:28:36.768151   15272 system_pods.go:61] "kube-proxy-qrtm6" [37d066e0-6ff3-4f22-abc3-6bddfa64736e] Running
	I0109 00:28:36.768151   15272 system_pods.go:61] "kube-scheduler-multinode-173500" [31d8cdf6-292f-4b3c-87c5-951fc34d0ea4] Running
	I0109 00:28:36.768151   15272 system_pods.go:61] "storage-provisioner" [936240bb-4bdd-4681-91a9-cb458c623805] Running
	I0109 00:28:36.768151   15272 system_pods.go:74] duration metric: took 165.6045ms to wait for pod list to return data ...
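Unlike the per-pod readiness waits, the system_pods step lists everything in kube-system once and checks each pod is Running. A minimal client-go sketch of that listing (assumed existing clientset; not the actual system_pods.go code):

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // notRunning returns the names of kube-system pods that are not in phase Running.
    func notRunning(ctx context.Context, cs *kubernetes.Clientset) ([]string, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return nil, err
        }
        var bad []string
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                bad = append(bad, p.Name)
            }
        }
        return bad, nil
    }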
	I0109 00:28:36.768151   15272 default_sa.go:34] waiting for default service account to be created ...
	I0109 00:28:36.955393   15272 request.go:629] Waited for 187.056ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/default/serviceaccounts
	I0109 00:28:36.955741   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/default/serviceaccounts
	I0109 00:28:36.955741   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:36.955793   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:36.955793   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:36.960020   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:28:36.960020   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:36.960020   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:36 GMT
	I0109 00:28:36.961039   15272 round_trippers.go:580]     Audit-Id: 42e9d292-87d5-4d40-bd7d-4ed39783ad5a
	I0109 00:28:36.961071   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:36.961071   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:36.961071   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:36.961071   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:36.961071   15272 round_trippers.go:580]     Content-Length: 262
	I0109 00:28:36.961071   15272 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1838"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a9cc6a7c-f512-49f6-8485-edb39bd8695b","resourceVersion":"311","creationTimestamp":"2024-01-09T00:05:44Z"}}]}
	I0109 00:28:36.961388   15272 default_sa.go:45] found service account: "default"
	I0109 00:28:36.961496   15272 default_sa.go:55] duration metric: took 193.287ms for default service account to be created ...
	I0109 00:28:36.961496   15272 system_pods.go:116] waiting for k8s-apps to be running ...
	I0109 00:28:37.157172   15272 request.go:629] Waited for 195.676ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods
	I0109 00:28:37.157342   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods
	I0109 00:28:37.157342   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:37.157555   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:37.157555   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:37.164895   15272 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0109 00:28:37.164895   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:37.164895   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:37.165143   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:37.165143   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:37.165143   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:37 GMT
	I0109 00:28:37.165143   15272 round_trippers.go:580]     Audit-Id: 4ab64dd9-3c79-4448-912a-1678bf5f75b6
	I0109 00:28:37.165143   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:37.167267   15272 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1838"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"1809","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82555 chars]
	I0109 00:28:37.171238   15272 system_pods.go:86] 12 kube-system pods found
	I0109 00:28:37.171304   15272 system_pods.go:89] "coredns-5dd5756b68-bkss9" [463fb6c6-1e85-419f-9c13-96e58a2ec22e] Running
	I0109 00:28:37.171304   15272 system_pods.go:89] "etcd-multinode-173500" [43da51b9-2249-4c4d-a9c0-4c899270d870] Running
	I0109 00:28:37.171304   15272 system_pods.go:89] "kindnet-6nz87" [73ad6ec4-cbfb-4b93-888c-3d430f3c7bf2] Running
	I0109 00:28:37.171304   15272 system_pods.go:89] "kindnet-ht547" [711faf1a-9f77-487c-bd84-1e227ab9c51a] Running
	I0109 00:28:37.171304   15272 system_pods.go:89] "kindnet-t72cs" [63893803-de87-4df9-ac98-3772bd46603c] Running
	I0109 00:28:37.171304   15272 system_pods.go:89] "kube-apiserver-multinode-173500" [5c089ac2-fe84-48d8-9727-a932903b646d] Running
	I0109 00:28:37.171304   15272 system_pods.go:89] "kube-controller-manager-multinode-173500" [a0252ea5-5d6a-4303-b7e6-151481d4cd8a] Running
	I0109 00:28:37.171304   15272 system_pods.go:89] "kube-proxy-4h4sv" [a45861ba-73e0-452f-a535-f66e154ea1c6] Running
	I0109 00:28:37.171304   15272 system_pods.go:89] "kube-proxy-mj6ks" [bd23c4c8-d363-4a3f-b750-a3de2346a3bb] Running
	I0109 00:28:37.171304   15272 system_pods.go:89] "kube-proxy-qrtm6" [37d066e0-6ff3-4f22-abc3-6bddfa64736e] Running
	I0109 00:28:37.171304   15272 system_pods.go:89] "kube-scheduler-multinode-173500" [31d8cdf6-292f-4b3c-87c5-951fc34d0ea4] Running
	I0109 00:28:37.171304   15272 system_pods.go:89] "storage-provisioner" [936240bb-4bdd-4681-91a9-cb458c623805] Running
	I0109 00:28:37.171304   15272 system_pods.go:126] duration metric: took 209.8079ms to wait for k8s-apps to be running ...
	I0109 00:28:37.171304   15272 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:28:37.184180   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:28:37.206772   15272 system_svc.go:56] duration metric: took 35.4678ms WaitForService to wait for kubelet.
	I0109 00:28:37.207036   15272 kubeadm.go:581] duration metric: took 18.8909923s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:28:37.207036   15272 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:28:37.361206   15272 request.go:629] Waited for 153.9876ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes
	I0109 00:28:37.361295   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes
	I0109 00:28:37.361591   15272 round_trippers.go:469] Request Headers:
	I0109 00:28:37.361729   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:28:37.361729   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:28:37.367568   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:28:37.367629   15272 round_trippers.go:577] Response Headers:
	I0109 00:28:37.367629   15272 round_trippers.go:580]     Audit-Id: d2ca5be5-3390-4cfe-af53-c9aa55fe2780
	I0109 00:28:37.367629   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:28:37.367629   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:28:37.367629   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:28:37.367629   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:28:37.367629   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:28:37 GMT
	I0109 00:28:37.367629   15272 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1838"},"items":[{"metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14731 chars]
	I0109 00:28:37.369889   15272 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:28:37.369889   15272 node_conditions.go:123] node cpu capacity is 2
	I0109 00:28:37.369889   15272 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:28:37.369889   15272 node_conditions.go:123] node cpu capacity is 2
	I0109 00:28:37.369889   15272 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:28:37.369889   15272 node_conditions.go:123] node cpu capacity is 2
	I0109 00:28:37.369889   15272 node_conditions.go:105] duration metric: took 162.8532ms to run NodePressure ...
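The NodePressure verification lists the nodes and reads per-node capacity; the alternating "ephemeral capacity is 17784752Ki" / "cpu capacity is 2" lines above cover the three nodes in the cluster. A minimal sketch of pulling those figures with client-go (assumed clientset, illustrative only):

    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity reports the CPU and ephemeral-storage capacity of every node.
    func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
        return nil
    }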
	I0109 00:28:37.369889   15272 start.go:228] waiting for startup goroutines ...
	I0109 00:28:37.369889   15272 start.go:233] waiting for cluster config update ...
	I0109 00:28:37.369889   15272 start.go:242] writing updated cluster config ...
	I0109 00:28:37.384585   15272 config.go:182] Loaded profile config "multinode-173500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0109 00:28:37.385133   15272 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\config.json ...
	I0109 00:28:37.392945   15272 out.go:177] * Starting worker node multinode-173500-m02 in cluster multinode-173500
	I0109 00:28:37.395094   15272 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0109 00:28:37.395094   15272 cache.go:56] Caching tarball of preloaded images
	I0109 00:28:37.395094   15272 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0109 00:28:37.395767   15272 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0109 00:28:37.396102   15272 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\config.json ...
	I0109 00:28:37.398782   15272 start.go:365] acquiring machines lock for multinode-173500-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0109 00:28:37.398782   15272 start.go:369] acquired machines lock for "multinode-173500-m02" in 0s
	I0109 00:28:37.399338   15272 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:28:37.399379   15272 fix.go:54] fixHost starting: m02
	I0109 00:28:37.399708   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:28:39.577919   15272 main.go:141] libmachine: [stdout =====>] : Off
	
	I0109 00:28:39.577919   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:28:39.577919   15272 fix.go:102] recreateIfNeeded on multinode-173500-m02: state=Stopped err=<nil>
	W0109 00:28:39.577919   15272 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:28:39.583296   15272 out.go:177] * Restarting existing hyperv VM for "multinode-173500-m02" ...
	I0109 00:28:39.585584   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-173500-m02
	I0109 00:28:42.762854   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:28:42.762923   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:28:42.762923   15272 main.go:141] libmachine: Waiting for host to start...
	I0109 00:28:42.762975   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:28:45.119301   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:28:45.119412   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:28:45.119412   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:28:47.727166   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:28:47.727166   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:28:48.729901   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:28:51.004550   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:28:51.004862   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:28:51.005069   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:28:53.660076   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:28:53.660076   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:28:54.660430   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:28:56.920798   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:28:56.920871   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:28:56.920871   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:28:59.495358   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:28:59.495411   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:00.496020   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:02.775841   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:02.775841   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:02.776088   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:29:05.386612   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:29:05.386612   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:06.390567   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:08.637523   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:08.637736   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:08.637870   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:29:11.301517   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:29:11.301912   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:11.304510   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:13.497706   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:13.497939   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:13.497939   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:29:16.089562   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:29:16.089862   15272 main.go:141] libmachine: [stderr =====>] : 
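The restart sequence above polls Hyper-V through PowerShell until the VM's first network adapter reports an IP address; an empty stdout (as in the first few attempts) means "not yet", and the loop sleeps and retries. A rough Go sketch of that polling pattern, mirroring the commands shown in the log (hypothetical timeout handling; not the actual hyperv driver code):

    package sketch

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForVMIP keeps asking Hyper-V for the VM's IP until one shows up or the deadline passes.
    func waitForVMIP(vm string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        query := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm)
        for time.Now().Before(deadline) {
            out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", query).Output()
            if err == nil {
                if ip := strings.TrimSpace(string(out)); ip != "" {
                    return ip, nil // e.g. 172.24.111.157 as logged above
                }
            }
            time.Sleep(time.Second) // adapter not reporting an address yet
        }
        return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
    }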
	I0109 00:29:16.090128   15272 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\config.json ...
	I0109 00:29:16.092894   15272 machine.go:88] provisioning docker machine ...
	I0109 00:29:16.092972   15272 buildroot.go:166] provisioning hostname "multinode-173500-m02"
	I0109 00:29:16.092972   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:18.286737   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:18.286737   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:18.286821   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:29:20.945545   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:29:20.945545   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:20.951745   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:29:20.952715   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.111.157 22 <nil> <nil>}
	I0109 00:29:20.952801   15272 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-173500-m02 && echo "multinode-173500-m02" | sudo tee /etc/hostname
	I0109 00:29:21.117188   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-173500-m02
	
	I0109 00:29:21.117266   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:23.319941   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:23.320145   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:23.320364   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:29:25.972137   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:29:25.972310   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:25.979464   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:29:25.981241   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.111.157 22 <nil> <nil>}
	I0109 00:29:25.981241   15272 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-173500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-173500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-173500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:29:26.133009   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
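The hostname and /etc/hosts commands above are run on the guest over SSH using the machine's private key (the SSHKeyPath shown in the sshutil lines). A rough, self-contained Go sketch of that pattern with golang.org/x/crypto/ssh (illustrative only, not minikube's ssh_runner; key path and address copied from the log, error handling reduced to panics):

// Sketch: run one provisioning command on the node over SSH.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "172.24.111.157:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput(`sudo hostname multinode-173500-m02 && echo "multinode-173500-m02" | sudo tee /etc/hostname`)
	fmt.Printf("output: %s err: %v\n", out, err)
}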
	I0109 00:29:26.133009   15272 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0109 00:29:26.133009   15272 buildroot.go:174] setting up certificates
	I0109 00:29:26.133009   15272 provision.go:83] configureAuth start
	I0109 00:29:26.133009   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:28.340541   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:28.340797   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:28.340797   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:29:31.027653   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:29:31.027950   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:31.027950   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:33.240556   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:33.240556   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:33.240647   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:29:35.834183   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:29:35.834368   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:35.834368   15272 provision.go:138] copyHostCerts
	I0109 00:29:35.834587   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0109 00:29:35.835112   15272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0109 00:29:35.835112   15272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0109 00:29:35.835112   15272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0109 00:29:35.836803   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0109 00:29:35.837161   15272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0109 00:29:35.837192   15272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0109 00:29:35.837557   15272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0109 00:29:35.838642   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0109 00:29:35.838895   15272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0109 00:29:35.839007   15272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0109 00:29:35.839249   15272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0109 00:29:35.840225   15272 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-173500-m02 san=[172.24.111.157 172.24.111.157 localhost 127.0.0.1 minikube multinode-173500-m02]
	I0109 00:29:36.125186   15272 provision.go:172] copyRemoteCerts
	I0109 00:29:36.140853   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:29:36.140853   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:38.303588   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:38.303588   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:38.303691   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:29:40.931147   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:29:40.931147   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:40.931477   15272 sshutil.go:53] new ssh client: &{IP:172.24.111.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02\id_rsa Username:docker}
	I0109 00:29:41.042198   15272 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9012129s)
	I0109 00:29:41.042259   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0109 00:29:41.042777   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0109 00:29:41.086473   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0109 00:29:41.086473   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0109 00:29:41.127730   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0109 00:29:41.127730   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0109 00:29:41.172560   15272 provision.go:86] duration metric: configureAuth took 15.0394719s
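configureAuth above regenerates the Docker TLS material for this node: a server certificate signed by the shared minikube CA with the SANs listed at provision.go:112 (the node IP, localhost/127.0.0.1, minikube and the hostname), after which ca.pem/server.pem/server-key.pem are copied to /etc/docker. A compact Go sketch of issuing such a SAN-bearing server certificate with crypto/x509 (a throwaway CA is generated inline to keep the example self-contained; this is not minikube's code, and errors are ignored for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA, standing in for the persistent minikube CA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "example CA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs seen in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-173500-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("172.24.111.157"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "multinode-173500-m02"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}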
	I0109 00:29:41.172650   15272 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:29:41.173594   15272 config.go:182] Loaded profile config "multinode-173500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0109 00:29:41.173685   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:43.361592   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:43.361592   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:43.361720   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:29:46.004121   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:29:46.004322   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:46.010533   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:29:46.011306   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.111.157 22 <nil> <nil>}
	I0109 00:29:46.011306   15272 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0109 00:29:46.154457   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0109 00:29:46.154542   15272 buildroot.go:70] root file system type: tmpfs
	I0109 00:29:46.154618   15272 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0109 00:29:46.154618   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:48.373298   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:48.373298   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:48.373397   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:29:51.066630   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:29:51.066938   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:51.073715   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:29:51.074698   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.111.157 22 <nil> <nil>}
	I0109 00:29:51.074911   15272 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.24.109.120"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0109 00:29:51.242233   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.24.109.120
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
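The `%!s(MISSING)` in the logged command above (and later in `date +%!s(MISSING).%!N(MISSING)`) is almost certainly a logging artifact rather than part of the command: Go's fmt package renders a format verb with no matching argument as `%!s(MISSING)`, so a literal `%s` or `%N` inside the command string gets mangled when the command is echoed into the log. The command that actually ran was `printf %s "[Unit] ..." | sudo tee /lib/systemd/system/docker.service.new`, as the clean unit file echoed back in this SSH output confirms; likewise the later command was `date +%s.%N`, matching its seconds.nanoseconds output.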
	
	I0109 00:29:51.242366   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:53.432117   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:53.432117   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:53.432481   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:29:55.997212   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:29:55.997352   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:56.004079   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:29:56.005460   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.111.157 22 <nil> <nil>}
	I0109 00:29:56.005460   15272 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0109 00:29:57.312410   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0109 00:29:57.312410   15272 machine.go:91] provisioned docker machine in 41.2195118s
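The one-liner at 00:29:56 is an idempotent unit update: diff the current docker.service against the freshly written docker.service.new and, only if they differ or the old file cannot be read, move the new file into place, daemon-reload, enable and restart Docker. The `diff: can't stat '/lib/systemd/system/docker.service'` message therefore just means no unit file existed yet on the guest (the root filesystem is tmpfs, per buildroot.go:70 above, so it does not survive a restart); the replace branch ran and created the multi-user.target.wants symlink. It is not itself a failure.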
	I0109 00:29:57.312410   15272 start.go:300] post-start starting for "multinode-173500-m02" (driver="hyperv")
	I0109 00:29:57.312410   15272 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:29:57.326563   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:29:57.326563   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:29:59.521278   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:29:59.521455   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:29:59.521729   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:30:02.164699   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:30:02.164910   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:02.165171   15272 sshutil.go:53] new ssh client: &{IP:172.24.111.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02\id_rsa Username:docker}
	I0109 00:30:02.276584   15272 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9500207s)
	I0109 00:30:02.292174   15272 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:30:02.298959   15272 command_runner.go:130] > NAME=Buildroot
	I0109 00:30:02.298959   15272 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0109 00:30:02.298959   15272 command_runner.go:130] > ID=buildroot
	I0109 00:30:02.298959   15272 command_runner.go:130] > VERSION_ID=2021.02.12
	I0109 00:30:02.298959   15272 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0109 00:30:02.299331   15272 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:30:02.299331   15272 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0109 00:30:02.300032   15272 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0109 00:30:02.301498   15272 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0109 00:30:02.301498   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0109 00:30:02.316666   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:30:02.333827   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0109 00:30:02.375661   15272 start.go:303] post-start completed in 5.0632505s
	I0109 00:30:02.375661   15272 fix.go:56] fixHost completed within 1m24.9762731s
	I0109 00:30:02.375661   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:30:04.564657   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:30:04.564742   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:04.564830   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:30:07.191342   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:30:07.191486   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:07.197497   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:30:07.198335   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.111.157 22 <nil> <nil>}
	I0109 00:30:07.198335   15272 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0109 00:30:07.338755   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704760207.333519426
	
	I0109 00:30:07.338755   15272 fix.go:206] guest clock: 1704760207.333519426
	I0109 00:30:07.338755   15272 fix.go:219] Guest: 2024-01-09 00:30:07.333519426 +0000 UTC Remote: 2024-01-09 00:30:02.3756614 +0000 UTC m=+236.760321701 (delta=4.957858026s)
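fix.go compares the guest clock against the host's last recorded timestamp for this machine: the guest reported 1704760207.333519426 (00:30:07.334 UTC) while the Remote timestamp was 00:30:02.3756614 UTC, so the logged delta is 07.333519426 − 02.3756614 = 4.957858026 s. Since the delta is non-trivial, the provisioner follows up by pinning the guest clock with `sudo date -s @1704760207`, as seen a few lines below.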
	I0109 00:30:07.338755   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:30:09.499713   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:30:09.499713   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:09.499713   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:30:12.123647   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:30:12.123647   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:12.130401   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:30:12.131119   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.111.157 22 <nil> <nil>}
	I0109 00:30:12.131119   15272 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1704760207
	I0109 00:30:12.280877   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jan  9 00:30:07 UTC 2024
	
	I0109 00:30:12.280877   15272 fix.go:226] clock set: Tue Jan  9 00:30:07 UTC 2024
	 (err=<nil>)
	I0109 00:30:12.280877   15272 start.go:83] releasing machines lock for "multinode-173500-m02", held for 1m34.8820854s
	I0109 00:30:12.281163   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:30:14.444218   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:30:14.444218   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:14.444218   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:30:17.027633   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:30:17.027809   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:17.029739   15272 out.go:177] * Found network options:
	I0109 00:30:17.033510   15272 out.go:177]   - NO_PROXY=172.24.109.120
	W0109 00:30:17.035601   15272 proxy.go:119] fail to check proxy env: Error ip not in block
	I0109 00:30:17.038721   15272 out.go:177]   - NO_PROXY=172.24.109.120
	W0109 00:30:17.042782   15272 proxy.go:119] fail to check proxy env: Error ip not in block
	W0109 00:30:17.044396   15272 proxy.go:119] fail to check proxy env: Error ip not in block
	I0109 00:30:17.046684   15272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:30:17.046684   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:30:17.058939   15272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0109 00:30:17.058939   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:30:19.284576   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:30:19.284783   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:19.284576   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:30:19.284882   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:30:19.284882   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:19.285086   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:30:21.984934   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:30:21.985157   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:21.985388   15272 sshutil.go:53] new ssh client: &{IP:172.24.111.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02\id_rsa Username:docker}
	I0109 00:30:22.012960   15272 main.go:141] libmachine: [stdout =====>] : 172.24.111.157
	
	I0109 00:30:22.013084   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:22.013323   15272 sshutil.go:53] new ssh client: &{IP:172.24.111.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02\id_rsa Username:docker}
	I0109 00:30:22.181636   15272 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0109 00:30:22.181636   15272 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0109 00:30:22.181750   15272 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1349511s)
	I0109 00:30:22.181750   15272 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1228113s)
	W0109 00:30:22.181750   15272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:30:22.198679   15272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:30:22.224037   15272 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0109 00:30:22.224037   15272 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:30:22.224169   15272 start.go:475] detecting cgroup driver to use...
	I0109 00:30:22.224420   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:30:22.255923   15272 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0109 00:30:22.271433   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0109 00:30:22.311501   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0109 00:30:22.328635   15272 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0109 00:30:22.342294   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0109 00:30:22.373662   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0109 00:30:22.406793   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0109 00:30:22.437314   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0109 00:30:22.474101   15272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:30:22.506106   15272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0109 00:30:22.535418   15272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:30:22.550688   15272 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0109 00:30:22.564669   15272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:30:22.595523   15272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:30:22.763807   15272 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0109 00:30:22.789479   15272 start.go:475] detecting cgroup driver to use...
	I0109 00:30:22.803596   15272 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0109 00:30:22.824242   15272 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0109 00:30:22.824242   15272 command_runner.go:130] > [Unit]
	I0109 00:30:22.824342   15272 command_runner.go:130] > Description=Docker Application Container Engine
	I0109 00:30:22.824342   15272 command_runner.go:130] > Documentation=https://docs.docker.com
	I0109 00:30:22.824342   15272 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0109 00:30:22.824342   15272 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0109 00:30:22.824342   15272 command_runner.go:130] > StartLimitBurst=3
	I0109 00:30:22.824342   15272 command_runner.go:130] > StartLimitIntervalSec=60
	I0109 00:30:22.824342   15272 command_runner.go:130] > [Service]
	I0109 00:30:22.824342   15272 command_runner.go:130] > Type=notify
	I0109 00:30:22.824342   15272 command_runner.go:130] > Restart=on-failure
	I0109 00:30:22.824342   15272 command_runner.go:130] > Environment=NO_PROXY=172.24.109.120
	I0109 00:30:22.824342   15272 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0109 00:30:22.824342   15272 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0109 00:30:22.824342   15272 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0109 00:30:22.824342   15272 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0109 00:30:22.824342   15272 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0109 00:30:22.824342   15272 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0109 00:30:22.824342   15272 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0109 00:30:22.824342   15272 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0109 00:30:22.824342   15272 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0109 00:30:22.824342   15272 command_runner.go:130] > ExecStart=
	I0109 00:30:22.824342   15272 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0109 00:30:22.824342   15272 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0109 00:30:22.824342   15272 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0109 00:30:22.824342   15272 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0109 00:30:22.824342   15272 command_runner.go:130] > LimitNOFILE=infinity
	I0109 00:30:22.824342   15272 command_runner.go:130] > LimitNPROC=infinity
	I0109 00:30:22.824342   15272 command_runner.go:130] > LimitCORE=infinity
	I0109 00:30:22.824342   15272 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0109 00:30:22.824342   15272 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0109 00:30:22.824342   15272 command_runner.go:130] > TasksMax=infinity
	I0109 00:30:22.824342   15272 command_runner.go:130] > TimeoutStartSec=0
	I0109 00:30:22.824342   15272 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0109 00:30:22.824342   15272 command_runner.go:130] > Delegate=yes
	I0109 00:30:22.824342   15272 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0109 00:30:22.824342   15272 command_runner.go:130] > KillMode=process
	I0109 00:30:22.824342   15272 command_runner.go:130] > [Install]
	I0109 00:30:22.824342   15272 command_runner.go:130] > WantedBy=multi-user.target
	I0109 00:30:22.842554   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:30:22.872829   15272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:30:22.916239   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:30:22.951525   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0109 00:30:22.984607   15272 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0109 00:30:23.048939   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0109 00:30:23.073444   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:30:23.102522   15272 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0109 00:30:23.120616   15272 ssh_runner.go:195] Run: which cri-dockerd
	I0109 00:30:23.125971   15272 command_runner.go:130] > /usr/bin/cri-dockerd
	I0109 00:30:23.141575   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0109 00:30:23.158155   15272 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0109 00:30:23.201496   15272 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0109 00:30:23.379012   15272 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0109 00:30:23.540649   15272 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0109 00:30:23.540681   15272 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0109 00:30:23.586217   15272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:30:23.753832   15272 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0109 00:30:25.415147   15272 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6613144s)
	I0109 00:30:25.429957   15272 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0109 00:30:25.611502   15272 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0109 00:30:25.777258   15272 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0109 00:30:25.949773   15272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:30:26.127881   15272 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0109 00:30:26.173468   15272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:30:26.359584   15272 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0109 00:30:26.467093   15272 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0109 00:30:26.483334   15272 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0109 00:30:26.491034   15272 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0109 00:30:26.491034   15272 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0109 00:30:26.491034   15272 command_runner.go:130] > Device: 16h/22d	Inode: 901         Links: 1
	I0109 00:30:26.491034   15272 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0109 00:30:26.491034   15272 command_runner.go:130] > Access: 2024-01-09 00:30:26.357538478 +0000
	I0109 00:30:26.491034   15272 command_runner.go:130] > Modify: 2024-01-09 00:30:26.357538478 +0000
	I0109 00:30:26.491034   15272 command_runner.go:130] > Change: 2024-01-09 00:30:26.362538478 +0000
	I0109 00:30:26.491034   15272 command_runner.go:130] >  Birth: -
	I0109 00:30:26.491034   15272 start.go:543] Will wait 60s for crictl version
	I0109 00:30:26.507221   15272 ssh_runner.go:195] Run: which crictl
	I0109 00:30:26.512165   15272 command_runner.go:130] > /usr/bin/crictl
	I0109 00:30:26.526544   15272 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:30:26.603592   15272 command_runner.go:130] > Version:  0.1.0
	I0109 00:30:26.603710   15272 command_runner.go:130] > RuntimeName:  docker
	I0109 00:30:26.603710   15272 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0109 00:30:26.603710   15272 command_runner.go:130] > RuntimeApiVersion:  v1
	I0109 00:30:26.604102   15272 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0109 00:30:26.614991   15272 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0109 00:30:26.650555   15272 command_runner.go:130] > 24.0.7
	I0109 00:30:26.662197   15272 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0109 00:30:26.694166   15272 command_runner.go:130] > 24.0.7
	I0109 00:30:26.698965   15272 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0109 00:30:26.702770   15272 out.go:177]   - env NO_PROXY=172.24.109.120
	I0109 00:30:26.704426   15272 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0109 00:30:26.709338   15272 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0109 00:30:26.709338   15272 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0109 00:30:26.709338   15272 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0109 00:30:26.709338   15272 ip.go:207] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:c4:61:0a Flags:up|broadcast|multicast|running}
	I0109 00:30:26.712343   15272 ip.go:210] interface addr: fe80::3fa5:15f5:46dc:dc8f/64
	I0109 00:30:26.712343   15272 ip.go:210] interface addr: 172.24.96.1/20
	I0109 00:30:26.725349   15272 ssh_runner.go:195] Run: grep 172.24.96.1	host.minikube.internal$ /etc/hosts
	I0109 00:30:26.731400   15272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.24.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:30:26.751524   15272 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500 for IP: 172.24.111.157
	I0109 00:30:26.751524   15272 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:30:26.752339   15272 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0109 00:30:26.752701   15272 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0109 00:30:26.752985   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0109 00:30:26.753394   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0109 00:30:26.753716   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0109 00:30:26.754159   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0109 00:30:26.755125   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0109 00:30:26.755710   15272 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0109 00:30:26.755840   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0109 00:30:26.756339   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0109 00:30:26.756787   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0109 00:30:26.757234   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0109 00:30:26.758218   15272 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0109 00:30:26.758521   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0109 00:30:26.758812   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:30:26.759166   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0109 00:30:26.761952   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:30:26.804351   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0109 00:30:26.849692   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:30:26.889234   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:30:26.928297   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0109 00:30:26.965433   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:30:27.004528   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0109 00:30:27.064739   15272 ssh_runner.go:195] Run: openssl version
	I0109 00:30:27.075725   15272 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0109 00:30:27.089857   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0109 00:30:27.120877   15272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0109 00:30:27.126834   15272 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 23:11 /usr/share/ca-certificates/142882.pem
	I0109 00:30:27.126834   15272 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:11 /usr/share/ca-certificates/142882.pem
	I0109 00:30:27.140484   15272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0109 00:30:27.149579   15272 command_runner.go:130] > 3ec20f2e
	I0109 00:30:27.163459   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:30:27.194542   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:30:27.225734   15272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:30:27.232884   15272 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 22:56 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:30:27.232884   15272 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:56 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:30:27.250094   15272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:30:27.257324   15272 command_runner.go:130] > b5213941
	I0109 00:30:27.273114   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:30:27.306616   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0109 00:30:27.336824   15272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0109 00:30:27.344058   15272 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 23:11 /usr/share/ca-certificates/14288.pem
	I0109 00:30:27.344058   15272 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:11 /usr/share/ca-certificates/14288.pem
	I0109 00:30:27.358092   15272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0109 00:30:27.365849   15272 command_runner.go:130] > 51391683
	I0109 00:30:27.379246   15272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
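Each certificate pushed to /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject-hash name, which is how OpenSSL's hashed certificate directory lookup finds it: the hash printed by `openssl x509 -hash -noout` becomes the symlink name, e.g. 3ec20f2e → /etc/ssl/certs/3ec20f2e.0 for 142882.pem, b5213941 → /etc/ssl/certs/b5213941.0 for minikubeCA.pem, and 51391683 → /etc/ssl/certs/51391683.0 for 14288.pem.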
	I0109 00:30:27.411405   15272 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:30:27.416973   15272 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0109 00:30:27.416973   15272 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0109 00:30:27.428433   15272 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0109 00:30:27.465860   15272 command_runner.go:130] > cgroupfs
	I0109 00:30:27.465860   15272 cni.go:84] Creating CNI manager for ""
	I0109 00:30:27.465860   15272 cni.go:136] 3 nodes found, recommending kindnet
	I0109 00:30:27.465860   15272 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:30:27.465860   15272 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.24.111.157 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-173500 NodeName:multinode-173500-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.24.109.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.24.111.157 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:30:27.465860   15272 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.24.111.157
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-173500-m02"
	  kubeletExtraArgs:
	    node-ip: 172.24.111.157
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.24.109.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
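The YAML above is rendered from the options struct logged at kubeadm.go:176, with the node-specific fields (advertise address, node name, node-ip, CRI socket) substituted in. A small, hypothetical Go text/template sketch of that kind of rendering, limited to the InitConfiguration stanza (struct and template names here are made up for the example; minikube's real template is larger):

package main

import (
	"os"
	"text/template"
)

// nodeOpts is an illustrative stand-in for the per-node kubeadm options.
type nodeOpts struct {
	AdvertiseAddress string
	NodeName         string
	CRISocket        string
}

const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
  taints: []
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, nodeOpts{
		AdvertiseAddress: "172.24.111.157",
		NodeName:         "multinode-173500-m02",
		CRISocket:        "/var/run/cri-dockerd.sock",
	})
}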
	
	I0109 00:30:27.465860   15272 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-173500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.24.111.157
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-173500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:30:27.480427   15272 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0109 00:30:27.498139   15272 command_runner.go:130] > kubeadm
	I0109 00:30:27.498139   15272 command_runner.go:130] > kubectl
	I0109 00:30:27.498139   15272 command_runner.go:130] > kubelet
	I0109 00:30:27.498206   15272 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:30:27.511860   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0109 00:30:27.527124   15272 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (383 bytes)
	I0109 00:30:27.553750   15272 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0109 00:30:27.597538   15272 ssh_runner.go:195] Run: grep 172.24.109.120	control-plane.minikube.internal$ /etc/hosts
	I0109 00:30:27.603559   15272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.24.109.120	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:30:27.620478   15272 host.go:66] Checking if "multinode-173500" exists ...
	I0109 00:30:27.621606   15272 config.go:182] Loaded profile config "multinode-173500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0109 00:30:27.621606   15272 start.go:304] JoinCluster: &{Name:multinode-173500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-173500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.24.109.120 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.24.111.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.24.100.87 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:30:27.621835   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0109 00:30:27.621948   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:30:29.784396   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:30:29.784583   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:29.784678   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:30:32.381616   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:30:32.381616   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:32.381862   15272 sshutil.go:53] new ssh client: &{IP:172.24.109.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\id_rsa Username:docker}
	I0109 00:30:32.583582   15272 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token o4ugah.wbuog6qrdb131mae --discovery-token-ca-cert-hash sha256:6a12e94bf3397e7db59fa944f4e20c2c2c34b5794397b381e3c5134eb1900391 
	I0109 00:30:32.583664   15272 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9617753s)
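The join command printed above is freshly minted on the control-plane node; --ttl=0 makes the bootstrap token non-expiring. The equivalent manual step on that node would be roughly the following (illustrative sketch, not captured output; <token> and <hash> are placeholders):

    sudo kubeadm token create --print-join-command --ttl=0
    # prints: kubeadm join control-plane.minikube.internal:8443 --token <token> \
    #         --discovery-token-ca-cert-hash sha256:<hash>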
	I0109 00:30:32.583664   15272 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.24.111.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0109 00:30:32.583664   15272 host.go:66] Checking if "multinode-173500" exists ...
	I0109 00:30:32.598099   15272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-173500-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0109 00:30:32.598099   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:30:34.805509   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:30:34.805509   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:34.805778   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:30:37.401513   15272 main.go:141] libmachine: [stdout =====>] : 172.24.109.120
	
	I0109 00:30:37.401513   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:37.401787   15272 sshutil.go:53] new ssh client: &{IP:172.24.109.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\id_rsa Username:docker}
	I0109 00:30:37.596318   15272 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0109 00:30:37.681494   15272 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-t72cs, kube-system/kube-proxy-4h4sv
	I0109 00:30:40.724522   15272 command_runner.go:130] > node/multinode-173500-m02 cordoned
	I0109 00:30:40.724889   15272 command_runner.go:130] > pod "busybox-5bc68d56bd-txtnl" has DeletionTimestamp older than 1 seconds, skipping
	I0109 00:30:40.724889   15272 command_runner.go:130] > node/multinode-173500-m02 drained
	I0109 00:30:40.728297   15272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-173500-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (8.1301977s)
	I0109 00:30:40.728297   15272 node.go:108] successfully drained node "m02"
	I0109 00:30:40.728922   15272 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:30:40.730290   15272 kapi.go:59] client config for multinode-173500: &rest.Config{Host:"https://172.24.109.120:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e2c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0109 00:30:40.730557   15272 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0109 00:30:40.731335   15272 round_trippers.go:463] DELETE https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:40.731335   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:40.731335   15272 round_trippers.go:473]     Content-Type: application/json
	I0109 00:30:40.731335   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:40.731335   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:40.750379   15272 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0109 00:30:40.750379   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:40.750379   15272 round_trippers.go:580]     Audit-Id: b5cb8855-8e00-4202-9c10-d1bda015852b
	I0109 00:30:40.751314   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:40.751314   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:40.751314   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:40.751348   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:40.751348   15272 round_trippers.go:580]     Content-Length: 171
	I0109 00:30:40.751374   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:40 GMT
	I0109 00:30:40.751374   15272 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-173500-m02","kind":"nodes","uid":"2696f851-45f3-47f4-953f-d03a5dc2fac0"}}
	I0109 00:30:40.751516   15272 node.go:124] successfully deleted node "m02"
	I0109 00:30:40.751539   15272 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.24.111.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
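Before rejoining, minikube removes the stale node object in two steps: a drain that ignores DaemonSet pods and deletes emptyDir data, followed by a DELETE of the Node via the API (the 200 response above). The same cleanup with plain kubectl would be roughly (illustrative, not captured output):

    kubectl drain multinode-173500-m02 --ignore-daemonsets --delete-emptydir-data \
      --force --grace-period=1
    kubectl delete node multinode-173500-m02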
	I0109 00:30:40.751567   15272 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.24.111.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0109 00:30:40.751567   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token o4ugah.wbuog6qrdb131mae --discovery-token-ca-cert-hash sha256:6a12e94bf3397e7db59fa944f4e20c2c2c34b5794397b381e3c5134eb1900391 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-173500-m02"
	I0109 00:30:41.032352   15272 command_runner.go:130] > [preflight] Running pre-flight checks
	I0109 00:30:41.673258   15272 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0109 00:30:41.674049   15272 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0109 00:30:41.728343   15272 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:30:41.730604   15272 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:30:41.730975   15272 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0109 00:30:41.898856   15272 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0109 00:30:43.435338   15272 command_runner.go:130] > This node has joined the cluster:
	I0109 00:30:43.436021   15272 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0109 00:30:43.436021   15272 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0109 00:30:43.436067   15272 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0109 00:30:43.440380   15272 command_runner.go:130] ! W0109 00:30:41.009737    1365 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0109 00:30:43.440421   15272 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0109 00:30:43.440454   15272 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token o4ugah.wbuog6qrdb131mae --discovery-token-ca-cert-hash sha256:6a12e94bf3397e7db59fa944f4e20c2c2c34b5794397b381e3c5134eb1900391 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-173500-m02": (2.6888867s)
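The preflight warning above concerns flag syntax only: kubeadm now expects the CRI endpoint as a URL and silently prepends unix:// to the scheme-less value. A sketch of the same join with the updated syntax (illustrative, not captured output; <token> and <hash> are placeholders):

    sudo kubeadm join control-plane.minikube.internal:8443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> \
      --cri-socket unix:///var/run/cri-dockerd.sock --node-name multinode-173500-m02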
	I0109 00:30:43.440454   15272 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0109 00:30:43.711250   15272 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0109 00:30:43.970316   15272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=multinode-173500 minikube.k8s.io/updated_at=2024_01_09T00_30_43_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:44.147085   15272 command_runner.go:130] > node/multinode-173500-m02 labeled
	I0109 00:30:44.147218   15272 command_runner.go:130] > node/multinode-173500-m03 labeled
	I0109 00:30:44.147218   15272 start.go:306] JoinCluster complete in 16.5256106s
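Both m02 and m03 report "labeled" above because the label command targets the selector minikube.k8s.io/primary!=true rather than a single node, so every non-primary node is refreshed. A rough kubectl equivalent (illustrative, not captured output; only two of the labels are shown):

    kubectl label nodes -l 'minikube.k8s.io/primary!=true' \
      minikube.k8s.io/name=multinode-173500 minikube.k8s.io/primary=false --overwrite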
	I0109 00:30:44.147218   15272 cni.go:84] Creating CNI manager for ""
	I0109 00:30:44.147218   15272 cni.go:136] 3 nodes found, recommending kindnet
	I0109 00:30:44.162958   15272 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0109 00:30:44.171953   15272 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0109 00:30:44.171953   15272 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0109 00:30:44.171953   15272 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0109 00:30:44.171953   15272 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0109 00:30:44.171953   15272 command_runner.go:130] > Access: 2024-01-09 00:26:43.947705700 +0000
	I0109 00:30:44.171953   15272 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0109 00:30:44.171953   15272 command_runner.go:130] > Change: 2024-01-09 00:26:31.489000000 +0000
	I0109 00:30:44.172937   15272 command_runner.go:130] >  Birth: -
	I0109 00:30:44.172937   15272 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0109 00:30:44.172937   15272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0109 00:30:44.217540   15272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0109 00:30:44.656521   15272 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0109 00:30:44.656668   15272 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0109 00:30:44.656668   15272 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0109 00:30:44.656668   15272 command_runner.go:130] > daemonset.apps/kindnet configured
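With three nodes detected, minikube re-applies the kindnet CNI manifest; only the DaemonSet reports "configured", the rest is unchanged. To confirm the DaemonSet actually converges on all nodes, a follow-up check could be (illustrative sketch, not captured output; the 2m timeout is arbitrary):

    kubectl -n kube-system rollout status daemonset/kindnet --timeout=2m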
	I0109 00:30:44.657458   15272 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:30:44.658192   15272 kapi.go:59] client config for multinode-173500: &rest.Config{Host:"https://172.24.109.120:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e2c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0109 00:30:44.658692   15272 round_trippers.go:463] GET https://172.24.109.120:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0109 00:30:44.658692   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:44.658692   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:44.658692   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:44.666753   15272 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0109 00:30:44.666753   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:44.666753   15272 round_trippers.go:580]     Audit-Id: 3cc26e8f-61e9-49da-9767-4832e6b0d4e7
	I0109 00:30:44.666753   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:44.666753   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:44.666753   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:44.666753   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:44.666753   15272 round_trippers.go:580]     Content-Length: 292
	I0109 00:30:44.666753   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:44 GMT
	I0109 00:30:44.666753   15272 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"40c365d2-2414-4cb8-9731-fc615f6d2dcd","resourceVersion":"1814","creationTimestamp":"2024-01-09T00:05:31Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0109 00:30:44.666753   15272 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-173500" context rescaled to 1 replicas
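The GET against the coredns deployment's scale subresource shows spec.replicas is already 1, so no rescale request follows. Done by hand, the check and the (idempotent) set would be roughly (illustrative, not captured output):

    kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.replicas}'
    kubectl -n kube-system scale deployment coredns --replicas=1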
	I0109 00:30:44.666753   15272 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.24.111.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0109 00:30:44.671638   15272 out.go:177] * Verifying Kubernetes components...
	I0109 00:30:44.687646   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:30:44.710645   15272 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:30:44.711644   15272 kapi.go:59] client config for multinode-173500: &rest.Config{Host:"https://172.24.109.120:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-173500\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e2c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0109 00:30:44.711644   15272 node_ready.go:35] waiting up to 6m0s for node "multinode-173500-m02" to be "Ready" ...
	I0109 00:30:44.712649   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:44.712649   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:44.712649   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:44.712649   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:44.716646   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:30:44.716646   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:44.716646   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:44 GMT
	I0109 00:30:44.716646   15272 round_trippers.go:580]     Audit-Id: d591fe2c-ed8d-4549-9091-09fe84c48d0a
	I0109 00:30:44.716646   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:44.716646   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:44.716829   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:44.716829   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:44.717244   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"1998","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3782 chars]
	I0109 00:30:45.220005   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:45.220138   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:45.220138   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:45.220287   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:45.225704   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:30:45.225704   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:45.225704   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:45.225704   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:45.225704   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:45.225704   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:45.225704   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:45 GMT
	I0109 00:30:45.225704   15272 round_trippers.go:580]     Audit-Id: c12ecfe5-7b90-4438-8fa7-72f5eab5caf7
	I0109 00:30:45.225704   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"1998","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3782 chars]
	I0109 00:30:45.725159   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:45.725234   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:45.725234   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:45.725302   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:45.729065   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:30:45.729065   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:45.729183   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:45.729183   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:45.729183   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:45.729183   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:45.729183   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:45 GMT
	I0109 00:30:45.729183   15272 round_trippers.go:580]     Audit-Id: 1c858047-7378-4795-83fa-1cbcd858cec3
	I0109 00:30:45.729388   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:46.212921   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:46.212921   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:46.212921   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:46.212921   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:46.218684   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:30:46.218684   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:46.218955   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:46.218955   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:46.218955   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:46.218955   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:46.218955   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:46 GMT
	I0109 00:30:46.218955   15272 round_trippers.go:580]     Audit-Id: 4fef0360-33b2-4d8b-bdbd-98d01eb23780
	I0109 00:30:46.219112   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:46.715674   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:46.715674   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:46.715674   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:46.715910   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:46.719221   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:30:46.720229   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:46.720229   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:46.720229   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:46.720229   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:46.720229   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:46.720229   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:46 GMT
	I0109 00:30:46.720229   15272 round_trippers.go:580]     Audit-Id: cd6817c6-8c73-4f22-9ff1-c16874fd989b
	I0109 00:30:46.720432   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:46.721024   15272 node_ready.go:58] node "multinode-173500-m02" has status "Ready":"False"
	I0109 00:30:47.218596   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:47.218596   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:47.218596   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:47.218596   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:47.225020   15272 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0109 00:30:47.225020   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:47.225020   15272 round_trippers.go:580]     Audit-Id: e6614d03-0e74-4cd3-8c01-e81399d8f9e6
	I0109 00:30:47.225020   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:47.225020   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:47.225020   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:47.225020   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:47.225020   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:47 GMT
	I0109 00:30:47.225843   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:47.721912   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:47.721912   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:47.722252   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:47.722252   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:47.727622   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:30:47.727622   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:47.727622   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:47.727622   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:47 GMT
	I0109 00:30:47.727622   15272 round_trippers.go:580]     Audit-Id: d7f25ae5-d62c-4151-9817-eedd79b32a7f
	I0109 00:30:47.727817   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:47.727817   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:47.727817   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:47.728056   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:48.222732   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:48.222732   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:48.222732   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:48.222732   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:48.227361   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:30:48.227361   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:48.227361   15272 round_trippers.go:580]     Audit-Id: 89c493ae-636a-4a42-b368-a72964af7f4c
	I0109 00:30:48.227361   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:48.227361   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:48.227361   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:48.227361   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:48.227703   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:48 GMT
	I0109 00:30:48.228041   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:48.716408   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:48.716408   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:48.716497   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:48.716497   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:48.720869   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:30:48.720869   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:48.721368   15272 round_trippers.go:580]     Audit-Id: 141cfd0e-a4d3-41a8-aa8f-512137f92470
	I0109 00:30:48.721368   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:48.721368   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:48.721368   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:48.721368   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:48.721368   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:48 GMT
	I0109 00:30:48.721674   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:48.722370   15272 node_ready.go:58] node "multinode-173500-m02" has status "Ready":"False"
	I0109 00:30:49.224247   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:49.224305   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:49.224342   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:49.224342   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:49.232068   15272 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0109 00:30:49.232068   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:49.232068   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:49.232068   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:49.232068   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:49 GMT
	I0109 00:30:49.232068   15272 round_trippers.go:580]     Audit-Id: 6a751ac4-9998-481e-963e-ee1716cfbb72
	I0109 00:30:49.232068   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:49.232068   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:49.232621   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:49.714528   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:49.714528   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:49.714528   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:49.714528   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:49.719097   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:30:49.719343   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:49.719343   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:49.719414   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:49 GMT
	I0109 00:30:49.719414   15272 round_trippers.go:580]     Audit-Id: b2ff800c-f85d-4813-84a6-7e8a94361207
	I0109 00:30:49.719414   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:49.719414   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:49.719414   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:49.719414   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:50.215553   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:50.215659   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:50.215659   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:50.215659   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:50.222158   15272 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0109 00:30:50.222158   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:50.222158   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:50.222158   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:50 GMT
	I0109 00:30:50.222158   15272 round_trippers.go:580]     Audit-Id: e15c8ecc-31ea-4b6f-a7b3-170f1ceaad52
	I0109 00:30:50.222158   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:50.222158   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:50.222158   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:50.223016   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:50.717613   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:50.717613   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:50.717752   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:50.717752   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:50.722083   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:30:50.722192   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:50.722192   15272 round_trippers.go:580]     Audit-Id: d3ea3a3b-76c7-412b-a153-d0881803b619
	I0109 00:30:50.722192   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:50.722192   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:50.722192   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:50.722192   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:50.722192   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:50 GMT
	I0109 00:30:50.722511   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:50.723133   15272 node_ready.go:58] node "multinode-173500-m02" has status "Ready":"False"
	I0109 00:30:51.218988   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:51.218988   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.219058   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.219058   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.223422   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:30:51.223673   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.223673   15272 round_trippers.go:580]     Audit-Id: ff0f7ea3-06f3-4976-b542-f13047b6422c
	I0109 00:30:51.223673   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.223673   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.223673   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.223673   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.223778   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.223778   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2006","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3891 chars]
	I0109 00:30:51.712714   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:51.712803   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.712803   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.712803   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.716228   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:30:51.716228   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.716228   15272 round_trippers.go:580]     Audit-Id: f22c1f3f-6b4d-49a6-98ce-a2d668eeb2cb
	I0109 00:30:51.716228   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.717005   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.717005   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.717005   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.717005   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.717286   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2022","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3926 chars]
	I0109 00:30:51.717725   15272 node_ready.go:49] node "multinode-173500-m02" has status "Ready":"True"
	I0109 00:30:51.717725   15272 node_ready.go:38] duration metric: took 7.0060807s waiting for node "multinode-173500-m02" to be "Ready" ...
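The repeated GETs above are minikube polling the Node object until its status flips to Ready, which here took about 7 seconds. The same wait expressed with kubectl would be roughly (illustrative sketch, not captured output; the timeout mirrors the 6m budget in the log):

    kubectl wait --for=condition=Ready node/multinode-173500-m02 --timeout=6m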
	I0109 00:30:51.717725   15272 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:30:51.717869   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods
	I0109 00:30:51.718033   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.718033   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.718033   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.723463   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:30:51.723942   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.723942   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.723942   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.723942   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.724011   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.724011   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.724096   15272 round_trippers.go:580]     Audit-Id: 8a5ee13f-727e-4511-adcb-0b87e029c099
	I0109 00:30:51.727281   15272 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2024"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"1809","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83425 chars]
	I0109 00:30:51.731164   15272 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:51.731324   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bkss9
	I0109 00:30:51.731324   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.731324   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.731424   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.733697   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:30:51.734700   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.734700   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.734700   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.734700   15272 round_trippers.go:580]     Audit-Id: 4562c795-f599-420e-a327-2fb4777fcdad
	I0109 00:30:51.734700   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.734781   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.734781   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.734979   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bkss9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"463fb6c6-1e85-419f-9c13-96e58a2ec22e","resourceVersion":"1809","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"391af85f-9c35-497b-9b4f-c347a35d4a42","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"391af85f-9c35-497b-9b4f-c347a35d4a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6494 chars]
	I0109 00:30:51.735476   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:30:51.735582   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.735582   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.735582   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.737927   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:30:51.737927   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.737927   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.737927   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.737927   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.737927   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.737927   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.738734   15272 round_trippers.go:580]     Audit-Id: 31d72e7a-e6aa-484f-9207-8a45f9fdbf95
	I0109 00:30:51.738961   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0109 00:30:51.739357   15272 pod_ready.go:92] pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace has status "Ready":"True"
	I0109 00:30:51.739357   15272 pod_ready.go:81] duration metric: took 8.1122ms waiting for pod "coredns-5dd5756b68-bkss9" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:51.739516   15272 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:51.739605   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-173500
	I0109 00:30:51.739605   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.739643   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.739643   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.742872   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:30:51.742969   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.742969   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.742969   15272 round_trippers.go:580]     Audit-Id: 7d35fcc8-6651-4d3b-9f75-d3a0bb02de12
	I0109 00:30:51.742969   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.742969   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.742969   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.742969   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.743166   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-173500","namespace":"kube-system","uid":"43da51b9-2249-4c4d-a9c0-4c899270d870","resourceVersion":"1777","creationTimestamp":"2024-01-09T00:28:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.24.109.120:2379","kubernetes.io/config.hash":"d21425b7f4d2774c35dc812132e81582","kubernetes.io/config.mirror":"d21425b7f4d2774c35dc812132e81582","kubernetes.io/config.seen":"2024-01-09T00:28:04.947418401Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5873 chars]
	I0109 00:30:51.743724   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:30:51.743904   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.743904   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.743904   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.751286   15272 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0109 00:30:51.751286   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.751286   15272 round_trippers.go:580]     Audit-Id: eda42690-d82c-47b4-8148-1329a8c860b0
	I0109 00:30:51.751286   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.751286   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.751286   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.751286   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.751286   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.752317   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0109 00:30:51.752540   15272 pod_ready.go:92] pod "etcd-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:30:51.752540   15272 pod_ready.go:81] duration metric: took 13.0238ms waiting for pod "etcd-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:51.752540   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:51.752540   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-173500
	I0109 00:30:51.752540   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.752540   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.752540   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.760898   15272 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0109 00:30:51.760898   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.760898   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.760898   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.760898   15272 round_trippers.go:580]     Audit-Id: 29a90170-9e8b-406b-99cd-1a5603529e56
	I0109 00:30:51.760898   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.760898   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.760898   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.761519   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-173500","namespace":"kube-system","uid":"5c089ac2-fe84-48d8-9727-a932903b646d","resourceVersion":"1830","creationTimestamp":"2024-01-09T00:28:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.24.109.120:8443","kubernetes.io/config.hash":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.mirror":"3af26441278f10d0a9196ab55837c292","kubernetes.io/config.seen":"2024-01-09T00:28:04.947424101Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:28:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7409 chars]
	I0109 00:30:51.762115   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:30:51.762115   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.762115   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.762115   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.764747   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:30:51.765820   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.765820   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.765820   15272 round_trippers.go:580]     Audit-Id: e56a66c3-0b58-4b15-88ed-bde1d1234c31
	I0109 00:30:51.765820   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.765820   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.765820   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.765820   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.765920   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0109 00:30:51.766444   15272 pod_ready.go:92] pod "kube-apiserver-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:30:51.766444   15272 pod_ready.go:81] duration metric: took 13.9043ms waiting for pod "kube-apiserver-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:51.766444   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:51.766558   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-173500
	I0109 00:30:51.766558   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.766558   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.766558   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.769812   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:30:51.769812   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.769812   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.769812   15272 round_trippers.go:580]     Audit-Id: e9d5ccd9-8179-44fb-8b47-2667962a86f2
	I0109 00:30:51.769812   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.769812   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.769812   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.769812   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.771140   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-173500","namespace":"kube-system","uid":"a0252ea5-5d6a-4303-b7e6-151481d4cd8a","resourceVersion":"1796","creationTimestamp":"2024-01-09T00:05:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f6b180d5a2686dc98b0355b6df7f53ea","kubernetes.io/config.mirror":"f6b180d5a2686dc98b0355b6df7f53ea","kubernetes.io/config.seen":"2024-01-09T00:05:31.606504770Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7179 chars]
	I0109 00:30:51.771727   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:30:51.771727   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.771809   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.771809   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.774163   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:30:51.774163   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.774163   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.775050   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.775050   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.775050   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.775050   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.775050   15272 round_trippers.go:580]     Audit-Id: 2c5a8af0-b5cd-4833-a4d2-3e786999b33d
	I0109 00:30:51.775252   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0109 00:30:51.775664   15272 pod_ready.go:92] pod "kube-controller-manager-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:30:51.775730   15272 pod_ready.go:81] duration metric: took 9.2862ms waiting for pod "kube-controller-manager-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:51.775784   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4h4sv" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:51.916412   15272 request.go:629] Waited for 140.3167ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4h4sv
	I0109 00:30:51.916544   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4h4sv
	I0109 00:30:51.916544   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:51.916579   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:51.916770   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:51.921293   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:30:51.921293   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:51.921293   15272 round_trippers.go:580]     Audit-Id: 65481881-117f-4e71-923a-65423b6ea1c9
	I0109 00:30:51.921293   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:51.921293   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:51.921293   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:51.921293   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:51.921902   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:51 GMT
	I0109 00:30:51.922030   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4h4sv","generateName":"kube-proxy-","namespace":"kube-system","uid":"a45861ba-73e0-452f-a535-f66e154ea1c6","resourceVersion":"2014","creationTimestamp":"2024-01-09T00:08:43Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec2434b9-012c-4df1-b401-04556fed7700","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec2434b9-012c-4df1-b401-04556fed7700\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5751 chars]
	I0109 00:30:52.117252   15272 request.go:629] Waited for 193.9898ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:52.117356   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m02
	I0109 00:30:52.117356   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:52.117356   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:52.117356   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:52.120315   15272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:30:52.120315   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:52.120315   15272 round_trippers.go:580]     Audit-Id: 66f04b2e-e6f5-4823-aa91-db70dab8408c
	I0109 00:30:52.120315   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:52.120315   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:52.120315   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:52.120315   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:52.121320   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:52 GMT
	I0109 00:30:52.121556   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m02","uid":"5e797fe4-8400-423e-ad46-5d1f64335887","resourceVersion":"2022","creationTimestamp":"2024-01-09T00:30:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3926 chars]
	I0109 00:30:52.122484   15272 pod_ready.go:92] pod "kube-proxy-4h4sv" in "kube-system" namespace has status "Ready":"True"
	I0109 00:30:52.122557   15272 pod_ready.go:81] duration metric: took 346.6996ms waiting for pod "kube-proxy-4h4sv" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:52.122557   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mj6ks" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:52.319873   15272 request.go:629] Waited for 197.1837ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mj6ks
	I0109 00:30:52.319873   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mj6ks
	I0109 00:30:52.319873   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:52.319873   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:52.319873   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:52.324587   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:30:52.324587   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:52.324587   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:52.325286   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:52.325286   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:52 GMT
	I0109 00:30:52.325286   15272 round_trippers.go:580]     Audit-Id: 63eb578d-a3c3-4218-9ec8-44ee471b9f6c
	I0109 00:30:52.325286   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:52.325286   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:52.325518   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mj6ks","generateName":"kube-proxy-","namespace":"kube-system","uid":"bd23c4c8-d363-4a3f-b750-a3de2346a3bb","resourceVersion":"1866","creationTimestamp":"2024-01-09T00:13:24Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec2434b9-012c-4df1-b401-04556fed7700","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:13:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec2434b9-012c-4df1-b401-04556fed7700\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5972 chars]
	I0109 00:30:52.524732   15272 request.go:629] Waited for 198.452ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m03
	I0109 00:30:52.524874   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500-m03
	I0109 00:30:52.524953   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:52.525035   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:52.525035   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:52.528477   15272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:30:52.528477   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:52.528477   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:52 GMT
	I0109 00:30:52.528477   15272 round_trippers.go:580]     Audit-Id: ab14c57f-4c4d-4b62-bb59-37673876fe51
	I0109 00:30:52.528477   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:52.528477   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:52.528477   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:52.528908   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:52.529071   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500-m03","uid":"9d8a783a-d01b-498d-94ae-1e3f65e7667c","resourceVersion":"2000","creationTimestamp":"2024-01-09T00:23:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_30_43_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:23:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 4392 chars]
	I0109 00:30:52.529623   15272 pod_ready.go:97] node "multinode-173500-m03" hosting pod "kube-proxy-mj6ks" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500-m03" has status "Ready":"Unknown"
	I0109 00:30:52.529623   15272 pod_ready.go:81] duration metric: took 407.0665ms waiting for pod "kube-proxy-mj6ks" in "kube-system" namespace to be "Ready" ...
	E0109 00:30:52.529623   15272 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-173500-m03" hosting pod "kube-proxy-mj6ks" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-173500-m03" has status "Ready":"Unknown"
	I0109 00:30:52.529623   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qrtm6" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:52.725164   15272 request.go:629] Waited for 195.1625ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qrtm6
	I0109 00:30:52.725401   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qrtm6
	I0109 00:30:52.725401   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:52.725401   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:52.725401   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:52.732957   15272 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0109 00:30:52.732957   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:52.732957   15272 round_trippers.go:580]     Audit-Id: b163b680-e840-441c-8223-012bf75695a1
	I0109 00:30:52.732957   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:52.732957   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:52.732957   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:52.732957   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:52.732957   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:52 GMT
	I0109 00:30:52.732957   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qrtm6","generateName":"kube-proxy-","namespace":"kube-system","uid":"37d066e0-6ff3-4f22-abc3-6bddfa64736e","resourceVersion":"1833","creationTimestamp":"2024-01-09T00:05:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ec2434b9-012c-4df1-b401-04556fed7700","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec2434b9-012c-4df1-b401-04556fed7700\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0109 00:30:52.927189   15272 request.go:629] Waited for 192.9915ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:30:52.927189   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:30:52.927189   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:52.927189   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:52.927494   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:52.932000   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:30:52.932000   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:52.932156   15272 round_trippers.go:580]     Audit-Id: c143bf4f-38be-4bab-bd32-8c884580310c
	I0109 00:30:52.932156   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:52.932156   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:52.932156   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:52.932156   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:52.932156   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:52 GMT
	I0109 00:30:52.932755   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0109 00:30:52.933153   15272 pod_ready.go:92] pod "kube-proxy-qrtm6" in "kube-system" namespace has status "Ready":"True"
	I0109 00:30:52.933153   15272 pod_ready.go:81] duration metric: took 403.5293ms waiting for pod "kube-proxy-qrtm6" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:52.933153   15272 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:53.115569   15272 request.go:629] Waited for 182.4164ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-173500
	I0109 00:30:53.115569   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-173500
	I0109 00:30:53.115569   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:53.115569   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:53.115569   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:53.120177   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:30:53.120177   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:53.120310   15272 round_trippers.go:580]     Audit-Id: 2fadb41d-6486-480b-8884-e72c8e95c955
	I0109 00:30:53.120310   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:53.120310   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:53.120310   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:53.120310   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:53.120310   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:53 GMT
	I0109 00:30:53.120783   15272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-173500","namespace":"kube-system","uid":"31d8cdf6-292f-4b3c-87c5-951fc34d0ea4","resourceVersion":"1829","creationTimestamp":"2024-01-09T00:05:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70306498a200a6bbe0aa0b41e240d63b","kubernetes.io/config.mirror":"70306498a200a6bbe0aa0b41e240d63b","kubernetes.io/config.seen":"2024-01-09T00:05:21.481168866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:05:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4909 chars]
	I0109 00:30:53.317578   15272 request.go:629] Waited for 196.0716ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:30:53.317773   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes/multinode-173500
	I0109 00:30:53.317866   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:53.317904   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:53.317937   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:53.322787   15272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:30:53.322787   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:53.322787   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:53.322787   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:53.322908   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:53 GMT
	I0109 00:30:53.322908   15272 round_trippers.go:580]     Audit-Id: c1b41569-3337-4ef8-8a7f-d229495216a2
	I0109 00:30:53.322908   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:53.322908   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:53.323456   15272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-09T00:05:27Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0109 00:30:53.324140   15272 pod_ready.go:92] pod "kube-scheduler-multinode-173500" in "kube-system" namespace has status "Ready":"True"
	I0109 00:30:53.324218   15272 pod_ready.go:81] duration metric: took 391.0653ms waiting for pod "kube-scheduler-multinode-173500" in "kube-system" namespace to be "Ready" ...
	I0109 00:30:53.324321   15272 pod_ready.go:38] duration metric: took 1.6063485s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:30:53.324321   15272 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:30:53.341626   15272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:30:53.363420   15272 system_svc.go:56] duration metric: took 39.0997ms WaitForService to wait for kubelet.
	I0109 00:30:53.363420   15272 kubeadm.go:581] duration metric: took 8.6957761s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
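The exchange above is the readiness loop minikube runs after the worker rejoins: each system-critical pod is fetched from the API server and its Ready condition inspected, with client-go's client-side throttling inserting the small waits noted in the log. As a rough illustration of that pattern (not minikube's pod_ready.go; the kubeconfig path and pod name below are placeholders), a minimal client-go sketch:

    // Illustrative sketch only: poll a pod until its Ready condition is True,
    // the same kind of check the pod_ready.go lines above are logging.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path; minikube builds its client differently.
    	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\path\to\kubeconfig`)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-bkss9", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("pod is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to become Ready")
    }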
	I0109 00:30:53.363420   15272 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:30:53.519986   15272 request.go:629] Waited for 156.3343ms due to client-side throttling, not priority and fairness, request: GET:https://172.24.109.120:8443/api/v1/nodes
	I0109 00:30:53.520093   15272 round_trippers.go:463] GET https://172.24.109.120:8443/api/v1/nodes
	I0109 00:30:53.520093   15272 round_trippers.go:469] Request Headers:
	I0109 00:30:53.520093   15272 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0109 00:30:53.520093   15272 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:30:53.525572   15272 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0109 00:30:53.525572   15272 round_trippers.go:577] Response Headers:
	I0109 00:30:53.525572   15272 round_trippers.go:580]     Content-Type: application/json
	I0109 00:30:53.525572   15272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b65f6d3f-e21a-46b0-805d-9b71486b6c1c
	I0109 00:30:53.526154   15272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 160c1d93-30c8-4413-9986-7aa0d5f0fdd1
	I0109 00:30:53.526154   15272 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:30:53 GMT
	I0109 00:30:53.526154   15272 round_trippers.go:580]     Audit-Id: 8ab7c3ed-4243-4153-a648-b0d1899e17c9
	I0109 00:30:53.526154   15272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:30:53.526732   15272 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2027"},"items":[{"metadata":{"name":"multinode-173500","uid":"5e5c58e3-1ae8-4346-8766-7537fea36975","resourceVersion":"1835","creationTimestamp":"2024-01-09T00:05:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-173500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-173500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_05_33_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15594 chars]
	I0109 00:30:53.527632   15272 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:30:53.527757   15272 node_conditions.go:123] node cpu capacity is 2
	I0109 00:30:53.527757   15272 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:30:53.527757   15272 node_conditions.go:123] node cpu capacity is 2
	I0109 00:30:53.527757   15272 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:30:53.527757   15272 node_conditions.go:123] node cpu capacity is 2
	I0109 00:30:53.527757   15272 node_conditions.go:105] duration metric: took 164.3365ms to run NodePressure ...
	I0109 00:30:53.527757   15272 start.go:228] waiting for startup goroutines ...
	I0109 00:30:53.527867   15272 start.go:242] writing updated cluster config ...
	I0109 00:30:53.547608   15272 config.go:182] Loaded profile config "multinode-173500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0109 00:30:53.547912   15272 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\config.json ...
	I0109 00:30:53.557455   15272 out.go:177] * Starting worker node multinode-173500-m03 in cluster multinode-173500
	I0109 00:30:53.560292   15272 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0109 00:30:53.560292   15272 cache.go:56] Caching tarball of preloaded images
	I0109 00:30:53.560292   15272 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0109 00:30:53.560292   15272 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0109 00:30:53.560292   15272 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\config.json ...
	I0109 00:30:53.564614   15272 start.go:365] acquiring machines lock for multinode-173500-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0109 00:30:53.564614   15272 start.go:369] acquired machines lock for "multinode-173500-m03" in 0s
	I0109 00:30:53.564614   15272 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:30:53.564992   15272 fix.go:54] fixHost starting: m03
	I0109 00:30:53.565224   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:30:55.701776   15272 main.go:141] libmachine: [stdout =====>] : Off
	
	I0109 00:30:55.701776   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:55.702043   15272 fix.go:102] recreateIfNeeded on multinode-173500-m03: state=Stopped err=<nil>
	W0109 00:30:55.702043   15272 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:30:55.705287   15272 out.go:177] * Restarting existing hyperv VM for "multinode-173500-m03" ...
	I0109 00:30:55.709308   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-173500-m03
	I0109 00:30:58.216780   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:30:58.216848   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:30:58.216848   15272 main.go:141] libmachine: Waiting for host to start...
	I0109 00:30:58.216848   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:00.489934   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:00.490186   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:00.490186   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:31:03.098159   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:31:03.098317   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:04.101103   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:06.314469   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:06.314469   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:06.314558   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:31:08.915067   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:31:08.915137   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:09.930650   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:12.191605   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:12.191689   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:12.191749   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:31:14.809701   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:31:14.809701   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:15.814385   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:18.043625   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:18.043667   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:18.043753   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:31:20.628015   15272 main.go:141] libmachine: [stdout =====>] : 
	I0109 00:31:20.628015   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:21.631669   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:23.877007   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:23.877054   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:23.877097   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:31:26.521241   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:31:26.521500   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:26.524461   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:28.652900   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:28.652900   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:28.653276   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:31:31.289854   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:31:31.289854   15272 main.go:141] libmachine: [stderr =====>] : 
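The repeated Get-VM calls above are the driver waiting for the restarted VM to report both a Running state and a first IPv4 address, which Hyper-V only exposes once the guest is fully up; until then the ipaddresses query returns an empty string and the loop sleeps and retries. A minimal sketch of that retry loop, shelling out to the same PowerShell cmdlets seen in the log (illustrative only, not libmachine's implementation):

    // Illustrative sketch only: poll Hyper-V until the VM is Running and has
    // an IPv4 address, mirroring the Get-VM loop in the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // ps runs a single PowerShell expression and returns its trimmed stdout.
    func ps(cmd string) (string, error) {
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	const vm = "multinode-173500-m03"
    	for i := 0; i < 60; i++ {
    		state, _ := ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
    		ip, _ := ps(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
    		if state == "Running" && ip != "" {
    			fmt.Println("VM is up at", ip)
    			return
    		}
    		time.Sleep(1 * time.Second)
    	}
    	fmt.Println("timed out waiting for the VM to report an address")
    }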
	I0109 00:31:31.290298   15272 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-173500\config.json ...
	I0109 00:31:31.294000   15272 machine.go:88] provisioning docker machine ...
	I0109 00:31:31.294107   15272 buildroot.go:166] provisioning hostname "multinode-173500-m03"
	I0109 00:31:31.294214   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:33.439160   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:33.439160   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:33.439160   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:31:36.005454   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:31:36.005454   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:36.011371   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:31:36.012156   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.101.30 22 <nil> <nil>}
	I0109 00:31:36.012156   15272 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-173500-m03 && echo "multinode-173500-m03" | sudo tee /etc/hostname
	I0109 00:31:36.177233   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-173500-m03
	
	I0109 00:31:36.177233   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:38.314827   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:38.315126   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:38.315220   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:31:40.876972   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:31:40.877222   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:40.883078   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:31:40.883902   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.101.30 22 <nil> <nil>}
	I0109 00:31:40.883902   15272 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-173500-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-173500-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-173500-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:31:41.039738   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
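	For reference, the hostname provisioning step above reduces to the following idempotent shell sequence (hostname and file paths are taken from the log; the wrapper function name is illustrative only):
	
	  set_machine_hostname() {
	    local name="$1"                                   # e.g. multinode-173500-m03
	    sudo hostname "$name" && echo "$name" | sudo tee /etc/hostname
	    # only touch /etc/hosts if the name is not already present
	    if ! grep -q "\s${name}$" /etc/hosts; then
	      if grep -q '^127.0.1.1\s' /etc/hosts; then
	        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${name}/" /etc/hosts
	      else
	        echo "127.0.1.1 ${name}" | sudo tee -a /etc/hosts
	      fi
	    fi
	  }
	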
	I0109 00:31:41.039909   15272 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0109 00:31:41.039909   15272 buildroot.go:174] setting up certificates
	I0109 00:31:41.040018   15272 provision.go:83] configureAuth start
	I0109 00:31:41.040193   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:43.189630   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:43.189630   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:43.189630   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:31:45.751399   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:31:45.751399   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:45.751599   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:47.908281   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:47.908281   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:47.908281   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:31:50.461607   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:31:50.461659   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:50.461749   15272 provision.go:138] copyHostCerts
	I0109 00:31:50.461921   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0109 00:31:50.461921   15272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0109 00:31:50.461921   15272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0109 00:31:50.462988   15272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0109 00:31:50.464088   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0109 00:31:50.464118   15272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0109 00:31:50.464118   15272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0109 00:31:50.464663   15272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0109 00:31:50.465787   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0109 00:31:50.465860   15272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0109 00:31:50.465860   15272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0109 00:31:50.466554   15272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0109 00:31:50.467996   15272 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-173500-m03 san=[172.24.101.30 172.24.101.30 localhost 127.0.0.1 minikube multinode-173500-m03]
	I0109 00:31:50.542922   15272 provision.go:172] copyRemoteCerts
	I0109 00:31:50.557309   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:31:50.557309   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:52.720007   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:52.720400   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:52.720400   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:31:55.281807   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:31:55.282159   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:55.282388   15272 sshutil.go:53] new ssh client: &{IP:172.24.101.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m03\id_rsa Username:docker}
	I0109 00:31:55.390469   15272 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8331598s)
	I0109 00:31:55.391483   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0109 00:31:55.391923   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0109 00:31:55.434454   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0109 00:31:55.434968   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0109 00:31:55.477684   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0109 00:31:55.477684   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0109 00:31:55.524030   15272 provision.go:86] duration metric: configureAuth took 14.4840105s
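	The copyRemoteCerts step logged above creates /etc/docker over SSH and then copies ca.pem, server.pem and server-key.pem into it. A rough plain ssh/scp equivalent follows (the test binary uses its internal ssh_runner; the commands below are illustrative and only reuse the node IP, user and key path shown in the log, while the /tmp staging directory is an assumption):
	
	  ssh -i id_rsa docker@172.24.101.30 'sudo mkdir -p /etc/docker'
	  scp -i id_rsa ca.pem server.pem server-key.pem docker@172.24.101.30:/tmp/
	  ssh -i id_rsa docker@172.24.101.30 \
	      'sudo mv /tmp/ca.pem /tmp/server.pem /tmp/server-key.pem /etc/docker/'
	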
	I0109 00:31:55.524208   15272 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:31:55.524912   15272 config.go:182] Loaded profile config "multinode-173500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0109 00:31:55.524978   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:31:57.704612   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:31:57.704612   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:31:57.704698   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:32:00.362858   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:32:00.362858   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:32:00.369522   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:32:00.370265   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.101.30 22 <nil> <nil>}
	I0109 00:32:00.370265   15272 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0109 00:32:00.511800   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0109 00:32:00.511886   15272 buildroot.go:70] root file system type: tmpfs
	I0109 00:32:00.511959   15272 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0109 00:32:00.511959   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:32:02.666728   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:32:02.666836   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:32:02.666836   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:32:05.275733   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:32:05.275942   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:32:05.281484   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:32:05.282294   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.101.30 22 <nil> <nil>}
	I0109 00:32:05.282357   15272 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.24.109.120"
	Environment="NO_PROXY=172.24.109.120,172.24.111.157"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0109 00:32:05.447325   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.24.109.120
	Environment=NO_PROXY=172.24.109.120,172.24.111.157
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0109 00:32:05.447325   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:32:07.639997   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:32:07.640450   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:32:07.640563   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:32:10.221433   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:32:10.221433   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:32:10.230300   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:32:10.231076   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.101.30 22 <nil> <nil>}
	I0109 00:32:10.231076   15272 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0109 00:32:11.481518   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0109 00:32:11.481518   15272 machine.go:91] provisioned docker machine in 40.1874075s
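	The docker.service update above is a write-if-changed swap: the candidate unit is written to docker.service.new, compared against the installed unit, and only moved into place (with a daemon-reload, enable and restart) when they differ. A minimal standalone sketch of that step, using the paths from the log (the function name is illustrative):
	
	  update_docker_unit() {
	    local new=/lib/systemd/system/docker.service.new
	    local cur=/lib/systemd/system/docker.service
	    # if the files differ (or cur does not exist yet, as in this run), install the new unit
	    sudo diff -u "$cur" "$new" || {
	      sudo mv "$new" "$cur"
	      sudo systemctl -f daemon-reload &&
	        sudo systemctl -f enable docker &&
	        sudo systemctl -f restart docker
	    }
	  }
	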
	I0109 00:32:11.481518   15272 start.go:300] post-start starting for "multinode-173500-m03" (driver="hyperv")
	I0109 00:32:11.481518   15272 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:32:11.497813   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:32:11.497813   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:32:13.651655   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:32:13.651765   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:32:13.651765   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:32:16.230597   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:32:16.230640   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:32:16.231056   15272 sshutil.go:53] new ssh client: &{IP:172.24.101.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m03\id_rsa Username:docker}
	I0109 00:32:16.343128   15272 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.845314s)
	I0109 00:32:16.357813   15272 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:32:16.364800   15272 command_runner.go:130] > NAME=Buildroot
	I0109 00:32:16.364800   15272 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0109 00:32:16.364800   15272 command_runner.go:130] > ID=buildroot
	I0109 00:32:16.364800   15272 command_runner.go:130] > VERSION_ID=2021.02.12
	I0109 00:32:16.364800   15272 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0109 00:32:16.364800   15272 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:32:16.364800   15272 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0109 00:32:16.365524   15272 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0109 00:32:16.366717   15272 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0109 00:32:16.366717   15272 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0109 00:32:16.380396   15272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:32:16.396227   15272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0109 00:32:16.438863   15272 start.go:303] post-start completed in 4.9573438s
	I0109 00:32:16.438863   15272 fix.go:56] fixHost completed within 1m22.8738626s
	I0109 00:32:16.438863   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:32:18.654898   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:32:18.654983   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:32:18.654983   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	I0109 00:32:21.325147   15272 main.go:141] libmachine: [stdout =====>] : 172.24.101.30
	
	I0109 00:32:21.325147   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:32:21.332375   15272 main.go:141] libmachine: Using SSH client type: native
	I0109 00:32:21.333050   15272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.101.30 22 <nil> <nil>}
	I0109 00:32:21.333050   15272 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0109 00:32:21.472157   15272 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704760341.471060542
	
	I0109 00:32:21.472281   15272 fix.go:206] guest clock: 1704760341.471060542
	I0109 00:32:21.472281   15272 fix.go:219] Guest: 2024-01-09 00:32:21.471060542 +0000 UTC Remote: 2024-01-09 00:32:16.4388631 +0000 UTC m=+370.823510001 (delta=5.032197442s)
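	The clock-skew check above reads the guest's wall clock over SSH with "date +%s.%N" and subtracts the host-side timestamp taken when post-start finished; with the values logged here the arithmetic is:
	
	  # guest:  2024-01-09 00:32:21.471060542 UTC -> 1704760341.471060542
	  # remote: 2024-01-09 00:32:16.438863100 UTC -> 1704760336.438863100
	  echo "1704760341.471060542 - 1704760336.438863100" | bc
	  # 5.032197442   (the delta=5.032197442s reported above)
	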
	I0109 00:32:21.472281   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:32:23.679675   15272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:32:23.679887   15272 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:32:23.679887   15272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m03 ).networkadapters[0]).ipaddresses[0]
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-01-09 00:26:33 UTC, ends at Tue 2024-01-09 00:32:48 UTC. --
	Jan 09 00:28:19 multinode-173500 dockerd[1047]: time="2024-01-09T00:28:19.902155283Z" level=info msg="shim disconnected" id=789b5c23c558650eb68d1ab223cd9f89c56a676892681771b1885797a9cd3576 namespace=moby
	Jan 09 00:28:19 multinode-173500 dockerd[1047]: time="2024-01-09T00:28:19.902413108Z" level=warning msg="cleaning up after shim disconnected" id=789b5c23c558650eb68d1ab223cd9f89c56a676892681771b1885797a9cd3576 namespace=moby
	Jan 09 00:28:19 multinode-173500 dockerd[1047]: time="2024-01-09T00:28:19.902592726Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 09 00:28:28 multinode-173500 dockerd[1047]: time="2024-01-09T00:28:28.238102479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 09 00:28:28 multinode-173500 dockerd[1047]: time="2024-01-09T00:28:28.238188184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 09 00:28:28 multinode-173500 dockerd[1047]: time="2024-01-09T00:28:28.243638983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 09 00:28:28 multinode-173500 dockerd[1047]: time="2024-01-09T00:28:28.243842294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 09 00:28:28 multinode-173500 dockerd[1047]: time="2024-01-09T00:28:28.254738292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 09 00:28:28 multinode-173500 dockerd[1047]: time="2024-01-09T00:28:28.255001406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 09 00:28:28 multinode-173500 dockerd[1047]: time="2024-01-09T00:28:28.255275521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 09 00:28:28 multinode-173500 dockerd[1047]: time="2024-01-09T00:28:28.255377927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 09 00:28:28 multinode-173500 cri-dockerd[1260]: time="2024-01-09T00:28:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/30a92279e3889b6c79151934a8c3d971725294db60e0e4eb50e1234d7d77e978/resolv.conf as [nameserver 172.24.96.1]"
	Jan 09 00:28:29 multinode-173500 cri-dockerd[1260]: time="2024-01-09T00:28:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f2730d13c477a56ce0d7a4d536e516593eaae715d119d4c201780969ce10ec83/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jan 09 00:28:29 multinode-173500 dockerd[1047]: time="2024-01-09T00:28:29.205066923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 09 00:28:29 multinode-173500 dockerd[1047]: time="2024-01-09T00:28:29.206255385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 09 00:28:29 multinode-173500 dockerd[1047]: time="2024-01-09T00:28:29.206366190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 09 00:28:29 multinode-173500 dockerd[1047]: time="2024-01-09T00:28:29.206542999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 09 00:28:29 multinode-173500 dockerd[1047]: time="2024-01-09T00:28:29.353018633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 09 00:28:29 multinode-173500 dockerd[1047]: time="2024-01-09T00:28:29.353372051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 09 00:28:29 multinode-173500 dockerd[1047]: time="2024-01-09T00:28:29.353483257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 09 00:28:29 multinode-173500 dockerd[1047]: time="2024-01-09T00:28:29.353818874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 09 00:28:32 multinode-173500 dockerd[1047]: time="2024-01-09T00:28:32.211977519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 09 00:28:32 multinode-173500 dockerd[1047]: time="2024-01-09T00:28:32.212069823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 09 00:28:32 multinode-173500 dockerd[1047]: time="2024-01-09T00:28:32.212096724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 09 00:28:32 multinode-173500 dockerd[1047]: time="2024-01-09T00:28:32.212113524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	410bf2fc461a5       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       2                   fcc555429d03a       storage-provisioner
	33acd06708d28       8c811b4aec35f                                                                                         4 minutes ago       Running             busybox                   1                   f2730d13c477a       busybox-5bc68d56bd-cfnc7
	edae9a6871d6c       ead0a4a53df89                                                                                         4 minutes ago       Running             coredns                   1                   30a92279e3889       coredns-5dd5756b68-bkss9
	22631434a6765       c7d1297425461                                                                                         4 minutes ago       Running             kindnet-cni               1                   ff29e0b4c57f8       kindnet-ht547
	789b5c23c5586       6e38f40d628db                                                                                         4 minutes ago       Exited              storage-provisioner       1                   fcc555429d03a       storage-provisioner
	c82085dacc50a       83f6cc407eed8                                                                                         4 minutes ago       Running             kube-proxy                1                   b92010bf9024d       kube-proxy-qrtm6
	16b9c1e5d915c       e3db313c6dbc0                                                                                         4 minutes ago       Running             kube-scheduler            1                   5e6a603d50ca3       kube-scheduler-multinode-173500
	bfad31284f8da       73deb9a3f7025                                                                                         4 minutes ago       Running             etcd                      0                   a069341c01a85       etcd-multinode-173500
	b1a75b4088867       d058aa5ab969c                                                                                         4 minutes ago       Running             kube-controller-manager   1                   b1765f6ae6442       kube-controller-manager-multinode-173500
	c9bf127dcb9fc       7fe0e6f37db33                                                                                         4 minutes ago       Running             kube-apiserver            0                   781aa3a664182       kube-apiserver-multinode-173500
	d90035f998d24       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   2f9750b321708       busybox-5bc68d56bd-cfnc7
	cc24fe03754e0       ead0a4a53df89                                                                                         26 minutes ago      Exited              coredns                   0                   ea6b136c3ff5d       coredns-5dd5756b68-bkss9
	73ce70f8eca1e       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              26 minutes ago      Exited              kindnet-cni               0                   f8bc35a82f652       kindnet-ht547
	9faec0fdff890       83f6cc407eed8                                                                                         27 minutes ago      Exited              kube-proxy                0                   4ab23b363c354       kube-proxy-qrtm6
	c6bc1bb3e368d       e3db313c6dbc0                                                                                         27 minutes ago      Exited              kube-scheduler            0                   414e36a1f442f       kube-scheduler-multinode-173500
	aa0ba9733b8d8       d058aa5ab969c                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   1b9f9a6d5d523       kube-controller-manager-multinode-173500
	
	
	==> coredns [cc24fe03754e] <==
	[INFO] 10.244.1.2:59097 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000497s
	[INFO] 10.244.1.2:33857 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000706s
	[INFO] 10.244.1.2:51802 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000618s
	[INFO] 10.244.1.2:57262 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000549s
	[INFO] 10.244.1.2:52763 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001599s
	[INFO] 10.244.1.2:60132 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068s
	[INFO] 10.244.1.2:52590 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000511s
	[INFO] 10.244.0.3:37184 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002542s
	[INFO] 10.244.0.3:36933 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000836s
	[INFO] 10.244.0.3:46781 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000592s
	[INFO] 10.244.0.3:43261 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001146s
	[INFO] 10.244.1.2:36348 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001666s
	[INFO] 10.244.1.2:44924 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091s
	[INFO] 10.244.1.2:37397 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104s
	[INFO] 10.244.1.2:47064 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000527s
	[INFO] 10.244.0.3:58487 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000938s
	[INFO] 10.244.0.3:38603 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001792s
	[INFO] 10.244.0.3:45614 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001165s
	[INFO] 10.244.0.3:36160 - 5 "PTR IN 1.96.24.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0002146s
	[INFO] 10.244.1.2:39722 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001182s
	[INFO] 10.244.1.2:60559 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001412s
	[INFO] 10.244.1.2:42442 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001273s
	[INFO] 10.244.1.2:38705 - 5 "PTR IN 1.96.24.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0001219s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [edae9a6871d6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8a94475fd8f6b5be74d16a1164f3817e7e3c9c869aad283bf9dc9abd5dea1e10b4b9491d20650a72f422eaef0ab2bbcc33a356e2ff9bbbd28022709e05d1c5d7
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46255 - 63130 "HINFO IN 8245829342486442493.7821373369032168286. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033720935s
	
	
	==> describe nodes <==
	Name:               multinode-173500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-173500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=multinode-173500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_09T00_05_33_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jan 2024 00:05:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-173500
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jan 2024 00:32:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jan 2024 00:28:32 +0000   Tue, 09 Jan 2024 00:05:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jan 2024 00:28:32 +0000   Tue, 09 Jan 2024 00:05:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jan 2024 00:28:32 +0000   Tue, 09 Jan 2024 00:05:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jan 2024 00:28:32 +0000   Tue, 09 Jan 2024 00:28:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.24.109.120
	  Hostname:    multinode-173500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 118ac87763aa407e9b019cec433b63df
	  System UUID:                0ef18d3b-01b0-a246-9e9a-8c597fba2d09
	  Boot ID:                    c57de472-5ada-4c44-a847-3a8ed90f80e4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-cfnc7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-5dd5756b68-bkss9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-173500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m31s
	  kube-system                 kindnet-ht547                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-173500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-controller-manager-multinode-173500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-qrtm6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-173500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 27m                    kube-proxy       
	  Normal  Starting                 4m32s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node multinode-173500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node multinode-173500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node multinode-173500 status is now: NodeHasSufficientPID
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           27m                    node-controller  Node multinode-173500 event: Registered Node multinode-173500 in Controller
	  Normal  NodeReady                26m                    kubelet          Node multinode-173500 status is now: NodeReady
	  Normal  Starting                 4m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m44s (x8 over 4m44s)  kubelet          Node multinode-173500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m44s (x8 over 4m44s)  kubelet          Node multinode-173500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m44s (x7 over 4m44s)  kubelet          Node multinode-173500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m25s                  node-controller  Node multinode-173500 event: Registered Node multinode-173500 in Controller
	
	
	Name:               multinode-173500-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-173500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=multinode-173500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_09T00_30_43_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jan 2024 00:30:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-173500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jan 2024 00:32:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jan 2024 00:30:51 +0000   Tue, 09 Jan 2024 00:30:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jan 2024 00:30:51 +0000   Tue, 09 Jan 2024 00:30:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jan 2024 00:30:51 +0000   Tue, 09 Jan 2024 00:30:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jan 2024 00:30:51 +0000   Tue, 09 Jan 2024 00:30:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.24.111.157
	  Hostname:    multinode-173500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 524fcac37d07426d87db539f49c3a3cc
	  System UUID:                59ca1e55-1c20-9b4a-8413-0653325c9061
	  Boot ID:                    0d534edc-cd2d-4ed6-b22b-b160e6e34c07
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-qsv8j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kindnet-t72cs               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-4h4sv            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 23m                  kube-proxy  
	  Normal  Starting                 2m4s                 kube-proxy  
	  Normal  NodeHasSufficientMemory  24m (x5 over 24m)    kubelet     Node multinode-173500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x5 over 24m)    kubelet     Node multinode-173500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x5 over 24m)    kubelet     Node multinode-173500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                23m                  kubelet     Node multinode-173500-m02 status is now: NodeReady
	  Normal  Starting                 2m7s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m7s (x2 over 2m7s)  kubelet     Node multinode-173500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s (x2 over 2m7s)  kubelet     Node multinode-173500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s (x2 over 2m7s)  kubelet     Node multinode-173500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m7s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                118s                 kubelet     Node multinode-173500-m02 status is now: NodeReady
	
	
	Name:               multinode-173500-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-173500-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=multinode-173500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_09T00_30_43_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jan 2024 00:23:50 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-173500-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jan 2024 00:24:51 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 09 Jan 2024 00:24:02 +0000   Tue, 09 Jan 2024 00:29:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 09 Jan 2024 00:24:02 +0000   Tue, 09 Jan 2024 00:29:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 09 Jan 2024 00:24:02 +0000   Tue, 09 Jan 2024 00:29:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 09 Jan 2024 00:24:02 +0000   Tue, 09 Jan 2024 00:29:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.24.100.87
	  Hostname:    multinode-173500-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 5b8e9ceb41dc4271aabcebf70b973f88
	  System UUID:                bc141344-2437-9743-9eae-056eaa495e71
	  Boot ID:                    11faa7d6-e0af-4a26-a72b-81ae883f9bf8
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6nz87       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-mj6ks    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  Starting                 8m56s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x5 over 19m)      kubelet          Node multinode-173500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x5 over 19m)      kubelet          Node multinode-173500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x5 over 19m)      kubelet          Node multinode-173500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m                    kubelet          Node multinode-173500-m03 status is now: NodeReady
	  Normal  Starting                 8m59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m59s (x2 over 8m59s)  kubelet          Node multinode-173500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m59s (x2 over 8m59s)  kubelet          Node multinode-173500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m59s (x2 over 8m59s)  kubelet          Node multinode-173500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m55s                  node-controller  Node multinode-173500-m03 event: Registered Node multinode-173500-m03 in Controller
	  Normal  NodeReady                8m47s                  kubelet          Node multinode-173500-m03 status is now: NodeReady
	  Normal  RegisteredNode           4m25s                  node-controller  Node multinode-173500-m03 event: Registered Node multinode-173500-m03 in Controller
	  Normal  NodeNotReady             3m44s                  node-controller  Node multinode-173500-m03 status is now: NodeNotReady
	
	
	==> dmesg <==
	                "trace_clock=local"
	              on the kernel command line
	[  +1.289974] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.096217] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +1.252743] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000112] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +8.299557] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan 9 00:27] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.167637] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[ +25.814735] systemd-fstab-generator[968]: Ignoring "noauto" for root device
	[  +0.637649] systemd-fstab-generator[1008]: Ignoring "noauto" for root device
	[  +0.163143] systemd-fstab-generator[1019]: Ignoring "noauto" for root device
	[  +0.201383] systemd-fstab-generator[1032]: Ignoring "noauto" for root device
	[  +1.497713] kauditd_printk_skb: 28 callbacks suppressed
	[  +0.399124] systemd-fstab-generator[1205]: Ignoring "noauto" for root device
	[  +0.169773] systemd-fstab-generator[1216]: Ignoring "noauto" for root device
	[  +0.165331] systemd-fstab-generator[1227]: Ignoring "noauto" for root device
	[  +0.173505] systemd-fstab-generator[1238]: Ignoring "noauto" for root device
	[  +0.210719] systemd-fstab-generator[1252]: Ignoring "noauto" for root device
	[Jan 9 00:28] systemd-fstab-generator[1476]: Ignoring "noauto" for root device
	[  +0.905918] kauditd_printk_skb: 29 callbacks suppressed
	[ +20.348767] kauditd_printk_skb: 18 callbacks suppressed
	[Jan 9 00:30] hrtimer: interrupt took 3057680 ns
	
	
	==> etcd [bfad31284f8d] <==
	{"level":"info","ts":"2024-01-09T00:28:08.447624Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-09T00:28:08.447789Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-09T00:28:08.462297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a66b2354aff11e6 switched to configuration voters=(1902403834571854310)"}
	{"level":"info","ts":"2024-01-09T00:28:08.462519Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e7775a1fec048288","local-member-id":"1a66b2354aff11e6","added-peer-id":"1a66b2354aff11e6","added-peer-peer-urls":["https://172.24.100.178:2380"]}
	{"level":"info","ts":"2024-01-09T00:28:08.462765Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e7775a1fec048288","local-member-id":"1a66b2354aff11e6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:28:08.462934Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:28:08.472684Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-09T00:28:08.473059Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"1a66b2354aff11e6","initial-advertise-peer-urls":["https://172.24.109.120:2380"],"listen-peer-urls":["https://172.24.109.120:2380"],"advertise-client-urls":["https://172.24.109.120:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.24.109.120:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-09T00:28:08.473093Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-09T00:28:08.473185Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.24.109.120:2380"}
	{"level":"info","ts":"2024-01-09T00:28:08.487299Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.24.109.120:2380"}
	{"level":"info","ts":"2024-01-09T00:28:10.07675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a66b2354aff11e6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-09T00:28:10.076819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a66b2354aff11e6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-09T00:28:10.076932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a66b2354aff11e6 received MsgPreVoteResp from 1a66b2354aff11e6 at term 2"}
	{"level":"info","ts":"2024-01-09T00:28:10.076959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a66b2354aff11e6 became candidate at term 3"}
	{"level":"info","ts":"2024-01-09T00:28:10.076967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a66b2354aff11e6 received MsgVoteResp from 1a66b2354aff11e6 at term 3"}
	{"level":"info","ts":"2024-01-09T00:28:10.076977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a66b2354aff11e6 became leader at term 3"}
	{"level":"info","ts":"2024-01-09T00:28:10.076985Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1a66b2354aff11e6 elected leader 1a66b2354aff11e6 at term 3"}
	{"level":"info","ts":"2024-01-09T00:28:10.090989Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"1a66b2354aff11e6","local-member-attributes":"{Name:multinode-173500 ClientURLs:[https://172.24.109.120:2379]}","request-path":"/0/members/1a66b2354aff11e6/attributes","cluster-id":"e7775a1fec048288","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-09T00:28:10.090994Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-09T00:28:10.091334Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-09T00:28:10.092692Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.24.109.120:2379"}
	{"level":"info","ts":"2024-01-09T00:28:10.0929Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-09T00:28:10.092918Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-09T00:28:10.093822Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 00:32:49 up 6 min,  0 users,  load average: 0.52, 0.36, 0.17
	Linux multinode-173500 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [22631434a676] <==
	I0109 00:32:01.230489       1 main.go:250] Node multinode-173500-m03 has CIDR [10.244.3.0/24] 
	I0109 00:32:11.238705       1 main.go:223] Handling node with IPs: map[172.24.109.120:{}]
	I0109 00:32:11.238789       1 main.go:227] handling current node
	I0109 00:32:11.238803       1 main.go:223] Handling node with IPs: map[172.24.111.157:{}]
	I0109 00:32:11.238813       1 main.go:250] Node multinode-173500-m02 has CIDR [10.244.1.0/24] 
	I0109 00:32:11.239469       1 main.go:223] Handling node with IPs: map[172.24.100.87:{}]
	I0109 00:32:11.239505       1 main.go:250] Node multinode-173500-m03 has CIDR [10.244.3.0/24] 
	I0109 00:32:21.253158       1 main.go:223] Handling node with IPs: map[172.24.109.120:{}]
	I0109 00:32:21.253185       1 main.go:227] handling current node
	I0109 00:32:21.253237       1 main.go:223] Handling node with IPs: map[172.24.111.157:{}]
	I0109 00:32:21.253248       1 main.go:250] Node multinode-173500-m02 has CIDR [10.244.1.0/24] 
	I0109 00:32:21.253357       1 main.go:223] Handling node with IPs: map[172.24.100.87:{}]
	I0109 00:32:21.253365       1 main.go:250] Node multinode-173500-m03 has CIDR [10.244.3.0/24] 
	I0109 00:32:31.266961       1 main.go:223] Handling node with IPs: map[172.24.109.120:{}]
	I0109 00:32:31.267046       1 main.go:227] handling current node
	I0109 00:32:31.267061       1 main.go:223] Handling node with IPs: map[172.24.111.157:{}]
	I0109 00:32:31.267070       1 main.go:250] Node multinode-173500-m02 has CIDR [10.244.1.0/24] 
	I0109 00:32:31.267393       1 main.go:223] Handling node with IPs: map[172.24.100.87:{}]
	I0109 00:32:31.267564       1 main.go:250] Node multinode-173500-m03 has CIDR [10.244.3.0/24] 
	I0109 00:32:41.290189       1 main.go:223] Handling node with IPs: map[172.24.109.120:{}]
	I0109 00:32:41.290284       1 main.go:227] handling current node
	I0109 00:32:41.290304       1 main.go:223] Handling node with IPs: map[172.24.111.157:{}]
	I0109 00:32:41.290315       1 main.go:250] Node multinode-173500-m02 has CIDR [10.244.1.0/24] 
	I0109 00:32:41.290732       1 main.go:223] Handling node with IPs: map[172.24.100.87:{}]
	I0109 00:32:41.290868       1 main.go:250] Node multinode-173500-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [73ce70f8eca1] <==
	I0109 00:24:09.159626       1 main.go:250] Node multinode-173500-m03 has CIDR [10.244.3.0/24] 
	I0109 00:24:19.169312       1 main.go:223] Handling node with IPs: map[172.24.100.178:{}]
	I0109 00:24:19.169410       1 main.go:227] handling current node
	I0109 00:24:19.169425       1 main.go:223] Handling node with IPs: map[172.24.108.84:{}]
	I0109 00:24:19.169434       1 main.go:250] Node multinode-173500-m02 has CIDR [10.244.1.0/24] 
	I0109 00:24:19.170161       1 main.go:223] Handling node with IPs: map[172.24.100.87:{}]
	I0109 00:24:19.170249       1 main.go:250] Node multinode-173500-m03 has CIDR [10.244.3.0/24] 
	I0109 00:24:29.186190       1 main.go:223] Handling node with IPs: map[172.24.100.178:{}]
	I0109 00:24:29.186234       1 main.go:227] handling current node
	I0109 00:24:29.186247       1 main.go:223] Handling node with IPs: map[172.24.108.84:{}]
	I0109 00:24:29.186254       1 main.go:250] Node multinode-173500-m02 has CIDR [10.244.1.0/24] 
	I0109 00:24:29.186621       1 main.go:223] Handling node with IPs: map[172.24.100.87:{}]
	I0109 00:24:29.186636       1 main.go:250] Node multinode-173500-m03 has CIDR [10.244.3.0/24] 
	I0109 00:24:39.194600       1 main.go:223] Handling node with IPs: map[172.24.100.178:{}]
	I0109 00:24:39.194696       1 main.go:227] handling current node
	I0109 00:24:39.194713       1 main.go:223] Handling node with IPs: map[172.24.108.84:{}]
	I0109 00:24:39.194722       1 main.go:250] Node multinode-173500-m02 has CIDR [10.244.1.0/24] 
	I0109 00:24:39.194862       1 main.go:223] Handling node with IPs: map[172.24.100.87:{}]
	I0109 00:24:39.194876       1 main.go:250] Node multinode-173500-m03 has CIDR [10.244.3.0/24] 
	I0109 00:24:49.210918       1 main.go:223] Handling node with IPs: map[172.24.100.178:{}]
	I0109 00:24:49.211132       1 main.go:227] handling current node
	I0109 00:24:49.211178       1 main.go:223] Handling node with IPs: map[172.24.108.84:{}]
	I0109 00:24:49.211240       1 main.go:250] Node multinode-173500-m02 has CIDR [10.244.1.0/24] 
	I0109 00:24:49.211746       1 main.go:223] Handling node with IPs: map[172.24.100.87:{}]
	I0109 00:24:49.211916       1 main.go:250] Node multinode-173500-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c9bf127dcb9f] <==
	I0109 00:28:11.864519       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0109 00:28:11.866121       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0109 00:28:12.046640       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0109 00:28:12.053704       1 shared_informer.go:318] Caches are synced for configmaps
	I0109 00:28:12.058985       1 aggregator.go:166] initial CRD sync complete...
	I0109 00:28:12.059041       1 autoregister_controller.go:141] Starting autoregister controller
	I0109 00:28:12.059049       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0109 00:28:12.096285       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0109 00:28:12.133407       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0109 00:28:12.138451       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0109 00:28:12.140146       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0109 00:28:12.145434       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0109 00:28:12.146081       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0109 00:28:12.148231       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0109 00:28:12.160343       1 cache.go:39] Caches are synced for autoregister controller
	I0109 00:28:12.845972       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0109 00:28:13.292360       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.24.100.178 172.24.109.120]
	I0109 00:28:13.294627       1 controller.go:624] quota admission added evaluator for: endpoints
	I0109 00:28:13.304048       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0109 00:28:16.104993       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0109 00:28:16.451582       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0109 00:28:16.473066       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0109 00:28:16.619732       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0109 00:28:16.631776       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0109 00:28:33.285457       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.24.109.120]
	
	
	==> kube-controller-manager [aa0ba9733b8d] <==
	I0109 00:09:31.535231       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="72.6µs"
	I0109 00:09:31.556951       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="65.4µs"
	I0109 00:09:34.653596       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="12.070002ms"
	I0109 00:09:34.654245       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="94.5µs"
	I0109 00:09:34.798017       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="16.779203ms"
	I0109 00:09:34.798369       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="273µs"
	I0109 00:13:24.167964       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-173500-m02"
	I0109 00:13:24.170315       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-173500-m03\" does not exist"
	I0109 00:13:24.207965       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mj6ks"
	I0109 00:13:24.208454       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-6nz87"
	I0109 00:13:24.215233       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-173500-m03" podCIDRs=["10.244.2.0/24"]
	I0109 00:13:24.383750       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-173500-m03"
	I0109 00:13:24.383908       1 event.go:307] "Event occurred" object="multinode-173500-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-173500-m03 event: Registered Node multinode-173500-m03 in Controller"
	I0109 00:13:43.943752       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-173500-m03"
	I0109 00:21:24.522716       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-173500-m02"
	I0109 00:21:24.523908       1 event.go:307] "Event occurred" object="multinode-173500-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-173500-m03 status is now: NodeNotReady"
	I0109 00:21:24.550630       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-mj6ks" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0109 00:21:24.570821       1 event.go:307] "Event occurred" object="kube-system/kindnet-6nz87" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0109 00:23:49.097950       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-173500-m02"
	I0109 00:23:49.603600       1 event.go:307] "Event occurred" object="multinode-173500-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-173500-m03 event: Removing Node multinode-173500-m03 from Controller"
	I0109 00:23:50.462493       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-173500-m02"
	I0109 00:23:50.462964       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-173500-m03\" does not exist"
	I0109 00:23:50.482467       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-173500-m03" podCIDRs=["10.244.3.0/24"]
	I0109 00:23:54.604577       1 event.go:307] "Event occurred" object="multinode-173500-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-173500-m03 event: Registered Node multinode-173500-m03 in Controller"
	I0109 00:24:02.784216       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-173500-m02"
	
	
	==> kube-controller-manager [b1a75b408886] <==
	I0109 00:29:05.350911       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-txtnl" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0109 00:29:05.371841       1 event.go:307] "Event occurred" object="kube-system/kindnet-6nz87" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0109 00:29:05.381455       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="37.303188ms"
	I0109 00:29:05.381721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="128.8µs"
	I0109 00:29:05.391745       1 event.go:307] "Event occurred" object="kube-system/kindnet-t72cs" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0109 00:29:05.414535       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-4h4sv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0109 00:29:05.416069       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-mj6ks" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0109 00:30:37.728180       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-qsv8j"
	I0109 00:30:37.749700       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="52.193917ms"
	I0109 00:30:37.769598       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="19.834992ms"
	I0109 00:30:37.769851       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="100.199µs"
	I0109 00:30:42.258469       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-173500-m02\" does not exist"
	I0109 00:30:42.262075       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-txtnl" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-txtnl"
	I0109 00:30:42.280878       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-173500-m02" podCIDRs=["10.244.1.0/24"]
	I0109 00:30:43.186917       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="142.999µs"
	I0109 00:30:51.594510       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-173500-m02"
	I0109 00:30:51.628294       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="97.399µs"
	I0109 00:30:55.455067       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-txtnl" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-txtnl"
	I0109 00:30:57.238589       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="79.2µs"
	I0109 00:30:57.508245       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="74.5µs"
	I0109 00:30:57.516872       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="241.599µs"
	I0109 00:31:00.334317       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="249.799µs"
	I0109 00:31:00.359009       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="50.9µs"
	I0109 00:31:02.595897       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.837355ms"
	I0109 00:31:02.597160       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="26.4µs"
	
	
	==> kube-proxy [9faec0fdff89] <==
	I0109 00:05:46.392694       1 server_others.go:69] "Using iptables proxy"
	I0109 00:05:46.408193       1 node.go:141] Successfully retrieved node IP: 172.24.100.178
	I0109 00:05:46.459651       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0109 00:05:46.459700       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0109 00:05:46.463149       1 server_others.go:152] "Using iptables Proxier"
	I0109 00:05:46.463194       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0109 00:05:46.463690       1 server.go:846] "Version info" version="v1.28.4"
	I0109 00:05:46.463707       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0109 00:05:46.465493       1 config.go:188] "Starting service config controller"
	I0109 00:05:46.465591       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0109 00:05:46.465632       1 config.go:97] "Starting endpoint slice config controller"
	I0109 00:05:46.465657       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0109 00:05:46.469493       1 config.go:315] "Starting node config controller"
	I0109 00:05:46.469531       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0109 00:05:46.566029       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0109 00:05:46.566037       1 shared_informer.go:318] Caches are synced for service config
	I0109 00:05:46.569916       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [c82085dacc50] <==
	I0109 00:28:15.954440       1 server_others.go:69] "Using iptables proxy"
	I0109 00:28:15.997760       1 node.go:141] Successfully retrieved node IP: 172.24.109.120
	I0109 00:28:16.238592       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0109 00:28:16.238792       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0109 00:28:16.243820       1 server_others.go:152] "Using iptables Proxier"
	I0109 00:28:16.244646       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0109 00:28:16.245184       1 server.go:846] "Version info" version="v1.28.4"
	I0109 00:28:16.245654       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0109 00:28:16.249376       1 config.go:188] "Starting service config controller"
	I0109 00:28:16.250674       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0109 00:28:16.250762       1 config.go:97] "Starting endpoint slice config controller"
	I0109 00:28:16.250793       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0109 00:28:16.254899       1 config.go:315] "Starting node config controller"
	I0109 00:28:16.254931       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0109 00:28:16.351570       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0109 00:28:16.351599       1 shared_informer.go:318] Caches are synced for service config
	I0109 00:28:16.354996       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [16b9c1e5d915] <==
	I0109 00:28:09.194073       1 serving.go:348] Generated self-signed cert in-memory
	W0109 00:28:12.026312       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0109 00:28:12.026666       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0109 00:28:12.026780       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0109 00:28:12.026946       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0109 00:28:12.069951       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0109 00:28:12.070121       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0109 00:28:12.074408       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0109 00:28:12.075018       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0109 00:28:12.076343       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0109 00:28:12.076596       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0109 00:28:12.177137       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c6bc1bb3e368] <==
	E0109 00:05:28.459869       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0109 00:05:28.649547       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0109 00:05:28.649828       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0109 00:05:28.730526       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0109 00:05:28.730560       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0109 00:05:28.747358       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0109 00:05:28.747423       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0109 00:05:28.777226       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0109 00:05:28.777767       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0109 00:05:28.800761       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0109 00:05:28.800818       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0109 00:05:28.843807       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0109 00:05:28.844417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0109 00:05:28.888984       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0109 00:05:28.889016       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0109 00:05:28.937776       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0109 00:05:28.937898       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0109 00:05:28.955882       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0109 00:05:28.956129       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0109 00:05:29.004492       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0109 00:05:29.004621       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0109 00:05:29.046692       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0109 00:05:29.046989       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0109 00:05:30.083101       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0109 00:24:55.763707       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-09 00:26:33 UTC, ends at Tue 2024-01-09 00:32:49 UTC. --
	Jan 09 00:28:23 multinode-173500 kubelet[1482]: E0109 00:28:23.000528    1482 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-cfnc7" podUID="e574852f-f9c9-4fde-9457-2f4309bfabf4"
	Jan 09 00:28:23 multinode-173500 kubelet[1482]: E0109 00:28:23.002042    1482 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-bkss9" podUID="463fb6c6-1e85-419f-9c13-96e58a2ec22e"
	Jan 09 00:28:25 multinode-173500 kubelet[1482]: E0109 00:28:25.009667    1482 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-bkss9" podUID="463fb6c6-1e85-419f-9c13-96e58a2ec22e"
	Jan 09 00:28:25 multinode-173500 kubelet[1482]: E0109 00:28:25.017331    1482 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-cfnc7" podUID="e574852f-f9c9-4fde-9457-2f4309bfabf4"
	Jan 09 00:28:29 multinode-173500 kubelet[1482]: I0109 00:28:29.091832    1482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2730d13c477a56ce0d7a4d536e516593eaae715d119d4c201780969ce10ec83"
	Jan 09 00:28:29 multinode-173500 kubelet[1482]: I0109 00:28:29.339114    1482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30a92279e3889b6c79151934a8c3d971725294db60e0e4eb50e1234d7d77e978"
	Jan 09 00:28:32 multinode-173500 kubelet[1482]: I0109 00:28:32.000034    1482 scope.go:117] "RemoveContainer" containerID="789b5c23c558650eb68d1ab223cd9f89c56a676892681771b1885797a9cd3576"
	Jan 09 00:29:05 multinode-173500 kubelet[1482]: E0109 00:29:05.046709    1482 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 09 00:29:05 multinode-173500 kubelet[1482]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:29:05 multinode-173500 kubelet[1482]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:29:05 multinode-173500 kubelet[1482]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:29:05 multinode-173500 kubelet[1482]: I0109 00:29:05.071349    1482 scope.go:117] "RemoveContainer" containerID="e4e40eb718ff1811cfffe281d5c6abadd3dea086fad69e9f27695c381a839f74"
	Jan 09 00:29:05 multinode-173500 kubelet[1482]: I0109 00:29:05.114021    1482 scope.go:117] "RemoveContainer" containerID="16fd62cddf8b27cf06e0c673b049da653b21821eb9e9d1f4b46dfae2af229480"
	Jan 09 00:30:05 multinode-173500 kubelet[1482]: E0109 00:30:05.047062    1482 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 09 00:30:05 multinode-173500 kubelet[1482]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:30:05 multinode-173500 kubelet[1482]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:30:05 multinode-173500 kubelet[1482]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:31:05 multinode-173500 kubelet[1482]: E0109 00:31:05.047493    1482 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 09 00:31:05 multinode-173500 kubelet[1482]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:31:05 multinode-173500 kubelet[1482]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:31:05 multinode-173500 kubelet[1482]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:32:05 multinode-173500 kubelet[1482]: E0109 00:32:05.050124    1482 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 09 00:32:05 multinode-173500 kubelet[1482]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:32:05 multinode-173500 kubelet[1482]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:32:05 multinode-173500 kubelet[1482]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0109 00:32:40.772153    6624 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-173500 -n multinode-173500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-173500 -n multinode-173500: (12.3090656s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-173500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (503.23s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (479.03s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.784782652.exe start -p running-upgrade-248700 --memory=2200 --vm-driver=hyperv
E0109 00:49:50.644895   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
E0109 00:50:30.311235   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0109 00:50:43.613224   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:133: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.784782652.exe start -p running-upgrade-248700 --memory=2200 --vm-driver=hyperv: (4m42.3089887s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-248700 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0109 00:53:46.844133   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p running-upgrade-248700 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (2m4.4175046s)

                                                
                                                
-- stdout --
	* [running-upgrade-248700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the hyperv driver based on existing profile
	* Starting control plane node running-upgrade-248700 in cluster running-upgrade-248700
	* Updating the running hyperv "running-upgrade-248700" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0109 00:53:40.356058    5580 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0109 00:53:40.435220    5580 out.go:296] Setting OutFile to fd 1076 ...
	I0109 00:53:40.436080    5580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:53:40.436080    5580 out.go:309] Setting ErrFile to fd 660...
	I0109 00:53:40.436080    5580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:53:40.462718    5580 out.go:303] Setting JSON to false
	I0109 00:53:40.467779    5580 start.go:128] hostinfo: {"hostname":"minikube1","uptime":9115,"bootTime":1704752505,"procs":210,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0109 00:53:40.467895    5580 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0109 00:53:40.471392    5580 out.go:177] * [running-upgrade-248700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I0109 00:53:40.476560    5580 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:53:40.475496    5580 notify.go:220] Checking for updates...
	I0109 00:53:40.480317    5580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0109 00:53:40.482779    5580 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0109 00:53:40.484942    5580 out.go:177]   - MINIKUBE_LOCATION=17830
	I0109 00:53:40.492937    5580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:53:40.498483    5580 config.go:182] Loaded profile config "running-upgrade-248700": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0109 00:53:40.498535    5580 start_flags.go:694] config upgrade: Driver=hyperv
	I0109 00:53:40.498590    5580 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617
	I0109 00:53:40.498731    5580 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\running-upgrade-248700\config.json ...
	I0109 00:53:40.507041    5580 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0109 00:53:40.512488    5580 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:53:46.897764    5580 out.go:177] * Using the hyperv driver based on existing profile
	I0109 00:53:46.911406    5580 start.go:298] selected driver: hyperv
	I0109 00:53:46.912067    5580 start.go:902] validating driver "hyperv" against &{Name:running-upgrade-248700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0
ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.24.105.72 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0109 00:53:46.912412    5580 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0109 00:53:46.968062    5580 cni.go:84] Creating CNI manager for ""
	I0109 00:53:46.968062    5580 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0109 00:53:46.968062    5580 start_flags.go:323] config:
	{Name:running-upgrade-248700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.24.105.72 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0109 00:53:46.968621    5580 iso.go:125] acquiring lock: {Name:mk6e44fd4f974e035b521383471f58bfbae3f4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:53:46.975401    5580 out.go:177] * Starting control plane node running-upgrade-248700 in cluster running-upgrade-248700
	I0109 00:53:46.978154    5580 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	W0109 00:53:47.019874    5580 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0109 00:53:47.021159    5580 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\running-upgrade-248700\config.json ...
	I0109 00:53:47.021815    5580 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0
	I0109 00:53:47.021815    5580 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0109 00:53:47.021815    5580 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0109 00:53:47.021815    5580 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0
	I0109 00:53:47.021815    5580 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0
	I0109 00:53:47.022099    5580 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I0109 00:53:47.022157    5580 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5
	I0109 00:53:47.024046    5580 start.go:365] acquiring machines lock for running-upgrade-248700: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0109 00:53:47.024046    5580 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0
	I0109 00:53:47.225857    5580 cache.go:107] acquiring lock: {Name:mk7a50c4bf2c20bec1fff9de3ac74780139c1c4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:53:47.225857    5580 cache.go:107] acquiring lock: {Name:mka7be082bbc64a256cc388eda31b6c9edba386f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:53:47.226688    5580 cache.go:107] acquiring lock: {Name:mkbbc88bc55edd0ef8bd1c53673fe74e0129caa1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:53:47.227260    5580 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I0109 00:53:47.227260    5580 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I0109 00:53:47.227583    5580 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0109 00:53:47.231603    5580 cache.go:107] acquiring lock: {Name:mk4e8ee16ba5b475b341c78282e92381b8584a70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:53:47.232194    5580 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I0109 00:53:47.240337    5580 cache.go:107] acquiring lock: {Name:mk67b634fe9a890edc5195da54a2f3093e0c8f30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:53:47.240337    5580 cache.go:107] acquiring lock: {Name:mkcd99a49ef11cbbf53d95904dadb7eadb7e30f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:53:47.240337    5580 cache.go:107] acquiring lock: {Name:mk69342e4f48cfcf5669830048d73215a892bfa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:53:47.240337    5580 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0109 00:53:47.240337    5580 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 218.4206ms
	I0109 00:53:47.240337    5580 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0109 00:53:47.240337    5580 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0109 00:53:47.241084    5580 cache.go:107] acquiring lock: {Name:mkf253ced278c18e0b579f9f5e07f6a2fe7db678 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:53:47.241084    5580 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I0109 00:53:47.241084    5580 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0109 00:53:47.248113    5580 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I0109 00:53:47.250087    5580 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I0109 00:53:47.255126    5580 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I0109 00:53:47.256083    5580 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0109 00:53:47.256083    5580 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I0109 00:53:47.256083    5580 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0109 00:53:47.268113    5580 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	W0109 00:53:47.374436    5580 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0109 00:53:47.492053    5580 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0109 00:53:47.605615    5580 image.go:187] authn lookup for registry.k8s.io/coredns:1.6.5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0109 00:53:47.727698    5580 image.go:187] authn lookup for registry.k8s.io/pause:3.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0109 00:53:47.851680    5580 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0109 00:53:47.880946    5580 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0
	I0109 00:53:47.891946    5580 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0
	I0109 00:53:47.939529    5580 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0109 00:53:47.942076    5580 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5
	W0109 00:53:47.948033    5580 image.go:187] authn lookup for registry.k8s.io/etcd:3.4.3-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0109 00:53:48.069829    5580 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0109 00:53:48.140130    5580 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0
	I0109 00:53:48.167272    5580 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 exists
	I0109 00:53:48.167627    5580 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.1" took 1.1457626s
	I0109 00:53:48.167701    5580 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 succeeded
	I0109 00:53:48.191663    5580 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I0109 00:53:48.384536    5580 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0
	I0109 00:53:48.824216    5580 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 exists
	I0109 00:53:48.826452    5580 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns_1.6.5" took 1.8039705s
	I0109 00:53:48.826583    5580 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 succeeded
	I0109 00:53:48.941739    5580 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 exists
	I0109 00:53:48.941739    5580 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.17.0" took 1.9192579s
	I0109 00:53:48.941739    5580 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 succeeded
	I0109 00:53:49.455601    5580 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 exists
	I0109 00:53:49.457670    5580 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.17.0" took 2.4358539s
	I0109 00:53:49.457670    5580 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 succeeded
	I0109 00:53:49.742613    5580 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 exists
	I0109 00:53:49.743622    5580 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.17.0" took 2.7211411s
	I0109 00:53:49.743622    5580 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 succeeded
	I0109 00:53:49.810376    5580 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 exists
	I0109 00:53:49.810915    5580 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.17.0" took 2.7868688s
	I0109 00:53:49.811040    5580 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 succeeded
	I0109 00:53:50.658021    5580 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 exists
	I0109 00:53:50.658021    5580 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.4.3-0" took 3.6358629s
	I0109 00:53:50.659022    5580 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 succeeded
	I0109 00:53:50.659022    5580 cache.go:87] Successfully saved all images to host disk.
	I0109 00:53:57.257527    5580 start.go:369] acquired machines lock for "running-upgrade-248700" in 10.23348s
	I0109 00:53:57.257830    5580 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:53:57.257904    5580 fix.go:54] fixHost starting: minikube
	I0109 00:53:57.258753    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-248700 ).state
	I0109 00:54:00.190251    5580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:54:00.190251    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:54:00.190251    5580 fix.go:102] recreateIfNeeded on running-upgrade-248700: state=Running err=<nil>
	W0109 00:54:00.190251    5580 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:54:00.196266    5580 out.go:177] * Updating the running hyperv "running-upgrade-248700" VM ...
	I0109 00:54:00.199257    5580 machine.go:88] provisioning docker machine ...
	I0109 00:54:00.199257    5580 buildroot.go:166] provisioning hostname "running-upgrade-248700"
	I0109 00:54:00.199257    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-248700 ).state
	I0109 00:54:02.872964    5580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:54:02.873173    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:54:02.873262    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-248700 ).networkadapters[0]).ipaddresses[0]
	I0109 00:54:05.928676    5580 main.go:141] libmachine: [stdout =====>] : 172.24.105.72
	
	I0109 00:54:05.928676    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:54:05.934869    5580 main.go:141] libmachine: Using SSH client type: native
	I0109 00:54:05.935734    5580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.105.72 22 <nil> <nil>}
	I0109 00:54:05.935734    5580 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-248700 && echo "running-upgrade-248700" | sudo tee /etc/hostname
	I0109 00:54:06.086507    5580 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-248700
	
	I0109 00:54:06.087716    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-248700 ).state
	I0109 00:54:08.538306    5580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:54:08.538545    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:54:08.538816    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-248700 ).networkadapters[0]).ipaddresses[0]
	I0109 00:54:11.404865    5580 main.go:141] libmachine: [stdout =====>] : 172.24.105.72
	
	I0109 00:54:11.404865    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:54:11.411519    5580 main.go:141] libmachine: Using SSH client type: native
	I0109 00:54:11.412272    5580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.105.72 22 <nil> <nil>}
	I0109 00:54:11.412368    5580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-248700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-248700/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-248700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:54:11.555327    5580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:54:11.555867    5580 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0109 00:54:11.555867    5580 buildroot.go:174] setting up certificates
	I0109 00:54:11.555991    5580 provision.go:83] configureAuth start
	I0109 00:54:11.556043    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-248700 ).state
	I0109 00:54:13.976613    5580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:54:13.976697    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:54:13.976697    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-248700 ).networkadapters[0]).ipaddresses[0]
	I0109 00:54:17.147820    5580 main.go:141] libmachine: [stdout =====>] : 172.24.105.72
	
	I0109 00:54:17.148150    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:54:17.148150    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-248700 ).state
	I0109 00:54:19.541487    5580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:54:19.541487    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:54:19.541487    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-248700 ).networkadapters[0]).ipaddresses[0]
	I0109 00:54:22.339255    5580 main.go:141] libmachine: [stdout =====>] : 172.24.105.72
	
	I0109 00:54:22.339255    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:54:22.339255    5580 provision.go:138] copyHostCerts
	I0109 00:54:22.339861    5580 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0109 00:54:22.339930    5580 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0109 00:54:22.340631    5580 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0109 00:54:22.342360    5580 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0109 00:54:22.342360    5580 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0109 00:54:22.343127    5580 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0109 00:54:22.344605    5580 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0109 00:54:22.345397    5580 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0109 00:54:22.346127    5580 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0109 00:54:22.346830    5580 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.running-upgrade-248700 san=[172.24.105.72 172.24.105.72 localhost 127.0.0.1 minikube running-upgrade-248700]
	I0109 00:54:22.846800    5580 provision.go:172] copyRemoteCerts
	I0109 00:54:22.860296    5580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:54:22.860296    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-248700 ).state
	I0109 00:54:25.197769    5580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:54:25.197769    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:54:25.197769    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-248700 ).networkadapters[0]).ipaddresses[0]
	I0109 00:54:27.947978    5580 main.go:141] libmachine: [stdout =====>] : 172.24.105.72
	
	I0109 00:54:27.948216    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:54:27.948478    5580 sshutil.go:53] new ssh client: &{IP:172.24.105.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-248700\id_rsa Username:docker}
	I0109 00:54:28.067149    5580 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2068528s)
	I0109 00:54:28.068153    5580 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0109 00:54:28.086882    5580 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0109 00:54:28.107066    5580 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0109 00:54:28.129022    5580 provision.go:86] duration metric: configureAuth took 16.5729776s
	I0109 00:54:28.129149    5580 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:54:28.129876    5580 config.go:182] Loaded profile config "running-upgrade-248700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0109 00:54:28.130001    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-248700 ).state
	I0109 00:54:30.421752    5580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:54:30.421987    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:54:30.421987    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-248700 ).networkadapters[0]).ipaddresses[0]
	I0109 00:54:33.258525    5580 main.go:141] libmachine: [stdout =====>] : 172.24.105.72
	
	I0109 00:54:33.258525    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:54:33.264528    5580 main.go:141] libmachine: Using SSH client type: native
	I0109 00:54:33.264528    5580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.105.72 22 <nil> <nil>}
	I0109 00:54:33.264528    5580 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0109 00:54:33.425752    5580 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0109 00:54:33.425875    5580 buildroot.go:70] root file system type: tmpfs
	I0109 00:54:33.426030    5580 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0109 00:54:33.426137    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-248700 ).state
	I0109 00:54:35.733695    5580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:54:35.733695    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:54:35.733695    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-248700 ).networkadapters[0]).ipaddresses[0]
	I0109 00:54:38.534943    5580 main.go:141] libmachine: [stdout =====>] : 172.24.105.72
	
	I0109 00:54:38.534943    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:54:38.539915    5580 main.go:141] libmachine: Using SSH client type: native
	I0109 00:54:38.540950    5580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.105.72 22 <nil> <nil>}
	I0109 00:54:38.540950    5580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0109 00:54:38.710063    5580 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0109 00:54:38.710063    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-248700 ).state
	I0109 00:54:41.344203    5580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:54:41.344310    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:54:41.344423    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-248700 ).networkadapters[0]).ipaddresses[0]
	I0109 00:54:44.521896    5580 main.go:141] libmachine: [stdout =====>] : 172.24.105.72
	
	I0109 00:54:44.522102    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:54:44.527028    5580 main.go:141] libmachine: Using SSH client type: native
	I0109 00:54:44.528025    5580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.105.72 22 <nil> <nil>}
	I0109 00:54:44.528025    5580 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0109 00:55:01.021751    5580 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service
	+++ /lib/systemd/system/docker.service.new
	@@ -3,9 +3,12 @@
	 Documentation=https://docs.docker.com
	 After=network.target  minikube-automount.service docker.socket
	 Requires= minikube-automount.service docker.socket 
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	+Restart=on-failure
	 
	 
	 
	@@ -21,7 +24,7 @@
	 # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	 ExecStart=
	 ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	-ExecReload=/bin/kill -s HUP 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0109 00:55:01.021751    5580 machine.go:91] provisioned docker machine in 1m0.8224882s
	I0109 00:55:01.021895    5580 start.go:300] post-start starting for "running-upgrade-248700" (driver="hyperv")
	I0109 00:55:01.021895    5580 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:55:01.040020    5580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:55:01.040020    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-248700 ).state
	I0109 00:55:03.470986    5580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:55:03.470986    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:55:03.470986    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-248700 ).networkadapters[0]).ipaddresses[0]
	I0109 00:55:06.396140    5580 main.go:141] libmachine: [stdout =====>] : 172.24.105.72
	
	I0109 00:55:06.396334    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:55:06.396547    5580 sshutil.go:53] new ssh client: &{IP:172.24.105.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-248700\id_rsa Username:docker}
	I0109 00:55:06.500011    5580 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.4598838s)
	I0109 00:55:06.529563    5580 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:55:06.538325    5580 info.go:137] Remote host: Buildroot 2019.02.7
	I0109 00:55:06.538443    5580 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0109 00:55:06.539021    5580 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0109 00:55:06.540967    5580 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0109 00:55:06.557544    5580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:55:06.567121    5580 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0109 00:55:06.587308    5580 start.go:303] post-start completed in 5.5654128s
	I0109 00:55:06.587308    5580 fix.go:56] fixHost completed within 1m9.3293975s
	I0109 00:55:06.587395    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-248700 ).state
	I0109 00:55:08.903408    5580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:55:08.903629    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:55:08.903724    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-248700 ).networkadapters[0]).ipaddresses[0]
	I0109 00:55:11.673970    5580 main.go:141] libmachine: [stdout =====>] : 172.24.105.72
	
	I0109 00:55:11.673970    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:55:11.681139    5580 main.go:141] libmachine: Using SSH client type: native
	I0109 00:55:11.682451    5580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.105.72 22 <nil> <nil>}
	I0109 00:55:11.682451    5580 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0109 00:55:11.840755    5580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704761711.808770901
	
	I0109 00:55:11.840813    5580 fix.go:206] guest clock: 1704761711.808770901
	I0109 00:55:11.840813    5580 fix.go:219] Guest: 2024-01-09 00:55:11.808770901 +0000 UTC Remote: 2024-01-09 00:55:06.5873086 +0000 UTC m=+86.357966201 (delta=5.221462301s)
	I0109 00:55:11.840813    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-248700 ).state
	I0109 00:55:14.243578    5580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:55:14.243751    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:55:14.243751    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-248700 ).networkadapters[0]).ipaddresses[0]
	I0109 00:55:17.069359    5580 main.go:141] libmachine: [stdout =====>] : 172.24.105.72
	
	I0109 00:55:17.069439    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:55:17.078843    5580 main.go:141] libmachine: Using SSH client type: native
	I0109 00:55:17.079843    5580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.105.72 22 <nil> <nil>}
	I0109 00:55:17.079843    5580 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1704761711
	I0109 00:55:17.236870    5580 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jan  9 00:55:11 UTC 2024
	
	I0109 00:55:17.236955    5580 fix.go:226] clock set: Tue Jan  9 00:55:11 UTC 2024
	 (err=<nil>)
	I0109 00:55:17.236955    5580 start.go:83] releasing machines lock for "running-upgrade-248700", held for 1m19.9792882s
	I0109 00:55:17.237286    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-248700 ).state
	I0109 00:55:19.717321    5580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:55:19.717321    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:55:19.717321    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-248700 ).networkadapters[0]).ipaddresses[0]
	I0109 00:55:22.446872    5580 main.go:141] libmachine: [stdout =====>] : 172.24.105.72
	
	I0109 00:55:22.446952    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:55:22.453463    5580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:55:22.453463    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-248700 ).state
	I0109 00:55:22.469199    5580 ssh_runner.go:195] Run: cat /version.json
	I0109 00:55:22.470220    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-248700 ).state
	I0109 00:55:25.104109    5580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:55:25.104429    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:55:25.104587    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-248700 ).networkadapters[0]).ipaddresses[0]
	I0109 00:55:25.135777    5580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:55:25.136039    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:55:25.136139    5580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-248700 ).networkadapters[0]).ipaddresses[0]
	I0109 00:55:28.123844    5580 main.go:141] libmachine: [stdout =====>] : 172.24.105.72
	
	I0109 00:55:28.123844    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:55:28.123844    5580 sshutil.go:53] new ssh client: &{IP:172.24.105.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-248700\id_rsa Username:docker}
	I0109 00:55:28.202763    5580 main.go:141] libmachine: [stdout =====>] : 172.24.105.72
	
	I0109 00:55:28.202763    5580 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:55:28.202763    5580 sshutil.go:53] new ssh client: &{IP:172.24.105.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-248700\id_rsa Username:docker}
	I0109 00:55:28.327050    5580 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.8735859s)
	I0109 00:55:28.347054    5580 ssh_runner.go:235] Completed: cat /version.json: (5.8768337s)
	W0109 00:55:28.347054    5580 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0109 00:55:28.361064    5580 ssh_runner.go:195] Run: systemctl --version
	I0109 00:55:28.383055    5580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0109 00:55:28.392066    5580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:55:28.413047    5580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0109 00:55:28.444057    5580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0109 00:55:28.452051    5580 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0109 00:55:28.453055    5580 start.go:475] detecting cgroup driver to use...
	I0109 00:55:28.453055    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:55:28.488064    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0109 00:55:28.522060    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0109 00:55:28.533079    5580 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0109 00:55:28.547051    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0109 00:55:28.579103    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0109 00:55:28.616382    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0109 00:55:28.648957    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0109 00:55:28.671960    5580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:55:28.698958    5580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0109 00:55:28.724758    5580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:55:28.752749    5580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:55:28.781793    5580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:55:29.090677    5580 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0109 00:55:29.122735    5580 start.go:475] detecting cgroup driver to use...
	I0109 00:55:29.141726    5580 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0109 00:55:29.182752    5580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:55:29.218974    5580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:55:29.431386    5580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:55:29.469442    5580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0109 00:55:29.485267    5580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:55:29.524495    5580 ssh_runner.go:195] Run: which cri-dockerd
	I0109 00:55:29.547721    5580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0109 00:55:29.557227    5580 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0109 00:55:29.594578    5580 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0109 00:55:29.821778    5580 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0109 00:55:30.088015    5580 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0109 00:55:30.088327    5580 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0109 00:55:30.130936    5580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:55:30.330263    5580 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0109 00:55:44.453911    5580 ssh_runner.go:235] Completed: sudo systemctl restart docker: (14.1236467s)
	I0109 00:55:44.467840    5580 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0109 00:55:44.535601    5580 out.go:177] 
	W0109 00:55:44.547343    5580 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Logs begin at Tue 2024-01-09 00:50:32 UTC, end at Tue 2024-01-09 00:55:44 UTC. --
	Jan 09 00:52:00 running-upgrade-248700 systemd[1]: Starting Docker Application Container Engine...
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.466602029Z" level=info msg="Starting up"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.469360429Z" level=info msg="libcontainerd: started new containerd process" pid=2757
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.469588129Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.469678529Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.469763429Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.469860629Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.513247329Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.513749629Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.514174429Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.514558129Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.514655929Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.516965229Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.517050929Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.517211629Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.517591729Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.518027129Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.518134329Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.518215629Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.518225929Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.518234229Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.548789729Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.548955529Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.549054229Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.549120929Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.549136429Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.549150029Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.549163129Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.549176029Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.549187629Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.549211629Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.549567429Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.549838129Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550617729Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550726329Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550763929Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550776429Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550791229Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550801929Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550812229Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550823229Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550833029Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550843029Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550852929Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550991229Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.551187529Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.551211229Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.551223029Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.551360229Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.551558029Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.551574329Z" level=info msg="containerd successfully booted in 0.039917s"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.560857929Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.561038029Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.561074329Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.561228429Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.562814829Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.562956429Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.562982229Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.562992929Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.629742729Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.629870229Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.629885229Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.629892629Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.629903329Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.629952829Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.630208829Z" level=info msg="Loading containers: start."
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.821204729Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.920373829Z" level=info msg="Loading containers: done."
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.954901529Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.955373429Z" level=info msg="Daemon has completed initialization"
	Jan 09 00:52:01 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:01.078706029Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 09 00:52:01 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:01.078729729Z" level=info msg="API listen on [::]:2376"
	Jan 09 00:52:01 running-upgrade-248700 systemd[1]: Started Docker Application Container Engine.
	Jan 09 00:53:09 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:09.294192436Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1176d6a000adf5eb30fd1fca44d807236a95b72286e5e29c1f92cd273e60aa95/shim.sock" debug=false pid=4316
	Jan 09 00:53:09 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:09.432639024Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0e42fdb859a8c96508d0c0b9cc129f88344266cc115129c77d5827135992ea69/shim.sock" debug=false pid=4346
	Jan 09 00:53:09 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:09.943853588Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1b90ccd1ff5bcb7c19e770525a867eebb4be3e87af8e17728b3ad83f113fea22/shim.sock" debug=false pid=4404
	Jan 09 00:53:10 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:10.892110926Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c245d5bfa602bdb31dfc8e3d40d008b3425da444079f621f1eeedccbdeafe998/shim.sock" debug=false pid=4475
	Jan 09 00:53:10 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:10.905286668Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4ae6d5f5c82d19faf3fec3e79e3e5f8534bc184bf9126e51bf03383b66c86717/shim.sock" debug=false pid=4489
	Jan 09 00:53:10 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:10.911519835Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ab55a5701fff195465f5ca8965a28e09ea28b3bf437e17f2736501f21f7d0fce/shim.sock" debug=false pid=4496
	Jan 09 00:53:23 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:23.657686607Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8200cc5d099990e98a166febffb03ee6ea1992a526047f76a4da153d217e5b56/shim.sock" debug=false pid=4890
	Jan 09 00:53:23 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:23.699064900Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8952a1aa457ee649177bbdc4051271f5e567369dcba7219a67df99216aaa9a9d/shim.sock" debug=false pid=4907
	Jan 09 00:53:24 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:24.034399548Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/df16ce882d922f207d4fe1648016387e15e87174b8563dfcc5e3990290afad4f/shim.sock" debug=false pid=4978
	Jan 09 00:53:24 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:24.083173461Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/17f21e88a391b1b11281bd266b0d1d00c7e34a45c8d21ae20b1c2ef52128cc24/shim.sock" debug=false pid=4996
	Jan 09 00:53:43 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:43.878393727Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/eeb8ce9d2294c649f53482045ffddd0821b42ee450293f22081907ac6db11407/shim.sock" debug=false pid=5587
	Jan 09 00:53:44 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:44.388224361Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b42833878fb4a34121d336e3984b3f8947c5b64d69285582810f419a05a625c6/shim.sock" debug=false pid=5643
	Jan 09 00:53:48 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:48.772102670Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/08a419fa7cf4b392526eb5a8241eab3f031a341682e123390201cb69f49d500f/shim.sock" debug=false pid=5807
	Jan 09 00:53:48 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:48.995785307Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/23e8949cc0b7abd922a3eb09b49923b869a234976c4ae60c8c3642203662b634/shim.sock" debug=false pid=5840
	Jan 09 00:53:50 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:50.086841455Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/065d81c58870e2cf9f731eab4834f85b8a131ec830cc0febb633474dbbb5abfa/shim.sock" debug=false pid=5934
	Jan 09 00:53:50 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:50.351722314Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/40e7b37799eaf2989dc4419aeee38c52271fbcf7ef6ca3e91da7076be7e9568a/shim.sock" debug=false pid=5958
	Jan 09 00:53:52 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:52.730040561Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/280e5d91bb4f4478d5cf6a4388306777e849f3a94cf04656cbea1e0ace2af750/shim.sock" debug=false pid=6056
	Jan 09 00:53:53 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:53.352077461Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/01d55f79860c6bbd923d78b59305cfc742b3e0d4cda72f482455e7b2c4bbdd13/shim.sock" debug=false pid=6136
	Jan 09 00:54:45 running-upgrade-248700 systemd[1]: Stopping Docker Application Container Engine...
	Jan 09 00:54:45 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:45.061327969Z" level=info msg="Processing signal 'terminated'"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.512349669Z" level=info msg="shim reaped" id=065d81c58870e2cf9f731eab4834f85b8a131ec830cc0febb633474dbbb5abfa
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.512884207Z" level=info msg="shim reaped" id=08a419fa7cf4b392526eb5a8241eab3f031a341682e123390201cb69f49d500f
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.525312665Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.527748083Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.526429236Z" level=warning msg="065d81c58870e2cf9f731eab4834f85b8a131ec830cc0febb633474dbbb5abfa cleanup: failed to unmount IPC: umount /var/lib/docker/containers/065d81c58870e2cf9f731eab4834f85b8a131ec830cc0febb633474dbbb5abfa/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.607572124Z" level=info msg="shim reaped" id=ab55a5701fff195465f5ca8965a28e09ea28b3bf437e17f2736501f21f7d0fce
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.616039042Z" level=warning msg="ab55a5701fff195465f5ca8965a28e09ea28b3bf437e17f2736501f21f7d0fce cleanup: failed to unmount IPC: umount /var/lib/docker/containers/ab55a5701fff195465f5ca8965a28e09ea28b3bf437e17f2736501f21f7d0fce/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.616069538Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.668311678Z" level=info msg="shim reaped" id=280e5d91bb4f4478d5cf6a4388306777e849f3a94cf04656cbea1e0ace2af750
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.670433932Z" level=info msg="shim reaped" id=eeb8ce9d2294c649f53482045ffddd0821b42ee450293f22081907ac6db11407
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.678280222Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.680753535Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.698466080Z" level=info msg="shim reaped" id=0e42fdb859a8c96508d0c0b9cc129f88344266cc115129c77d5827135992ea69
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.731756119Z" level=info msg="shim reaped" id=1176d6a000adf5eb30fd1fca44d807236a95b72286e5e29c1f92cd273e60aa95
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.733887872Z" level=info msg="shim reaped" id=1b90ccd1ff5bcb7c19e770525a867eebb4be3e87af8e17728b3ad83f113fea22
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.734081349Z" level=info msg="shim reaped" id=8952a1aa457ee649177bbdc4051271f5e567369dcba7219a67df99216aaa9a9d
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.738161776Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.738224369Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.739470824Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.746962755Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.751165367Z" level=info msg="shim reaped" id=df16ce882d922f207d4fe1648016387e15e87174b8563dfcc5e3990290afad4f
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.765514603Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.766030743Z" level=warning msg="df16ce882d922f207d4fe1648016387e15e87174b8563dfcc5e3990290afad4f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/df16ce882d922f207d4fe1648016387e15e87174b8563dfcc5e3990290afad4f/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.766597977Z" level=info msg="shim reaped" id=23e8949cc0b7abd922a3eb09b49923b869a234976c4ae60c8c3642203662b634
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.781691927Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.784562694Z" level=info msg="shim reaped" id=17f21e88a391b1b11281bd266b0d1d00c7e34a45c8d21ae20b1c2ef52128cc24
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.794041094Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.794599929Z" level=warning msg="17f21e88a391b1b11281bd266b0d1d00c7e34a45c8d21ae20b1c2ef52128cc24 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/17f21e88a391b1b11281bd266b0d1d00c7e34a45c8d21ae20b1c2ef52128cc24/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.811499669Z" level=info msg="shim reaped" id=b42833878fb4a34121d336e3984b3f8947c5b64d69285582810f419a05a625c6
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.814771989Z" level=info msg="shim reaped" id=8200cc5d099990e98a166febffb03ee6ea1992a526047f76a4da153d217e5b56
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.818291981Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.818585547Z" level=warning msg="b42833878fb4a34121d336e3984b3f8947c5b64d69285582810f419a05a625c6 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/b42833878fb4a34121d336e3984b3f8947c5b64d69285582810f419a05a625c6/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.828356514Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:47 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:47.548228494Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fec49ec6f213065612ad769910096f3e458b41215039f397d6fe4836993a3fd3/shim.sock" debug=false pid=7584
	Jan 09 00:54:47 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:47.576401866Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1e93bc6784ccf561efd8ec373cc9939fe51069a650641995594aa2436a71f702/shim.sock" debug=false pid=7600
	Jan 09 00:54:48 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:48.001337784Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d3845df507f7bac7e6f2b8351bc52a8aa3d575b616f2568df8e81324776388c4/shim.sock" debug=false pid=7683
	Jan 09 00:54:48 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:48.020450821Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/52ebbf9bf57265eafa1b6fc0b8588480c05d2c7a82199520154000fc1db2d1b1/shim.sock" debug=false pid=7698
	Jan 09 00:54:48 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:48.873356788Z" level=info msg="shim reaped" id=52ebbf9bf57265eafa1b6fc0b8588480c05d2c7a82199520154000fc1db2d1b1
	Jan 09 00:54:48 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:48.883820004Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:48 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:48.884478530Z" level=warning msg="52ebbf9bf57265eafa1b6fc0b8588480c05d2c7a82199520154000fc1db2d1b1 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/52ebbf9bf57265eafa1b6fc0b8588480c05d2c7a82199520154000fc1db2d1b1/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:49 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:49.705624338Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a549c77298550ee24436a80c56d6547cacacac9fd24f52c0b210aedd214a412f/shim.sock" debug=false pid=7827
	Jan 09 00:54:50 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:50.050301653Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/93e22a0c47b71a920571a5900014195f32ca34debc7d3c7eb89f357295026033/shim.sock" debug=false pid=7870
	Jan 09 00:54:50 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:50.483218199Z" level=info msg="shim reaped" id=40e7b37799eaf2989dc4419aeee38c52271fbcf7ef6ca3e91da7076be7e9568a
	Jan 09 00:54:50 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:50.492872432Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:50 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:50.493221594Z" level=warning msg="40e7b37799eaf2989dc4419aeee38c52271fbcf7ef6ca3e91da7076be7e9568a cleanup: failed to unmount IPC: umount /var/lib/docker/containers/40e7b37799eaf2989dc4419aeee38c52271fbcf7ef6ca3e91da7076be7e9568a/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:50 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:50.506590816Z" level=info msg="shim reaped" id=01d55f79860c6bbd923d78b59305cfc742b3e0d4cda72f482455e7b2c4bbdd13
	Jan 09 00:54:50 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:50.517149949Z" level=warning msg="01d55f79860c6bbd923d78b59305cfc742b3e0d4cda72f482455e7b2c4bbdd13 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/01d55f79860c6bbd923d78b59305cfc742b3e0d4cda72f482455e7b2c4bbdd13/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:50 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:50.520221509Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:50 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:50.740439866Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1c3f4b67656465e286cb4e6f00ff0ad6dbd30c37dbabffdc8aac0f50ed16ef73/shim.sock" debug=false pid=7990
	Jan 09 00:54:51 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:51.294333909Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/201a3fdfeb4baa6ce37e7f1da911cb218b06a61f0c1631b20247811bd5749c78/shim.sock" debug=false pid=8084
	Jan 09 00:54:51 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:51.781408082Z" level=info msg="shim reaped" id=4ae6d5f5c82d19faf3fec3e79e3e5f8534bc184bf9126e51bf03383b66c86717
	Jan 09 00:54:51 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:51.791796047Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:51 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:51.792013223Z" level=warning msg="4ae6d5f5c82d19faf3fec3e79e3e5f8534bc184bf9126e51bf03383b66c86717 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/4ae6d5f5c82d19faf3fec3e79e3e5f8534bc184bf9126e51bf03383b66c86717/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:52 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:52.030027549Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7578c1a266f579d49991ad205c6ebc926038da58914f50df3c07a37fff7936cb/shim.sock" debug=false pid=8176
	Jan 09 00:54:52 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:52.435519832Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bba15e8e6ffbbff8fff33be9d76605551e8ef2fd9519c0ce9a48199cf0661de2/shim.sock" debug=false pid=8222
	Jan 09 00:54:53 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:53.946240402Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/89fbee98dc16a7e8c3ab9d522cf15f08da91600edf8620b02a0f3bb7937057cc/shim.sock" debug=false pid=8289
	Jan 09 00:54:54 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:54.492267606Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/257514166ff0f7cdc6035562484661f4cdc39be058baba64c8faaec9b46a344e/shim.sock" debug=false pid=8355
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.357693399Z" level=info msg="Container c245d5bfa602bdb31dfc8e3d40d008b3425da444079f621f1eeedccbdeafe998 failed to exit within 10 seconds of signal 15 - using the force"
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.526229166Z" level=info msg="shim reaped" id=c245d5bfa602bdb31dfc8e3d40d008b3425da444079f621f1eeedccbdeafe998
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.536025241Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.536200923Z" level=warning msg="c245d5bfa602bdb31dfc8e3d40d008b3425da444079f621f1eeedccbdeafe998 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c245d5bfa602bdb31dfc8e3d40d008b3425da444079f621f1eeedccbdeafe998/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.645061333Z" level=info msg="Daemon shutdown complete"
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.645136426Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.645171322Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.649082213Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.713257698Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.742897298Z" level=warning msg="7f5106de09d67d5d3886b97d5345b2c68479b74f2cef746c98e9f93cc1cc785c cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7f5106de09d67d5d3886b97d5345b2c68479b74f2cef746c98e9f93cc1cc785c/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.782145491Z" level=error msg="7f5106de09d67d5d3886b97d5345b2c68479b74f2cef746c98e9f93cc1cc785c cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.782226583Z" level=error msg="Handler for POST /containers/7f5106de09d67d5d3886b97d5345b2c68479b74f2cef746c98e9f93cc1cc785c/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.857008859Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.874665711Z" level=warning msg="7f3bd4fb98bb561ef9ed83a66a895417cc880ec31109e51fef69f210c1fe254e cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7f3bd4fb98bb561ef9ed83a66a895417cc880ec31109e51fef69f210c1fe254e/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.886319092Z" level=error msg="7f3bd4fb98bb561ef9ed83a66a895417cc880ec31109e51fef69f210c1fe254e cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.886532870Z" level=error msg="Handler for POST /containers/7f3bd4fb98bb561ef9ed83a66a895417cc880ec31109e51fef69f210c1fe254e/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Succeeded.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: Stopped Docker Application Container Engine.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Found left-over process 7584 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Found left-over process 7600 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Found left-over process 7683 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Found left-over process 7827 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Found left-over process 7870 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Found left-over process 7990 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Found left-over process 8084 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Found left-over process 8176 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Found left-over process 8222 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Found left-over process 8289 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Found left-over process 8355 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: Starting Docker Application Container Engine...
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.728612151Z" level=info msg="Starting up"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.731616140Z" level=info msg="libcontainerd: started new containerd process" pid=8463
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.731825418Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.731901511Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.732058894Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.732143985Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.778358600Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.779062327Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.779700461Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.780245805Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.780375591Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.782149207Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.782240298Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.782818938Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.783518266Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.784035612Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.784120803Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.784184497Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.784271988Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.784318883Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.784770936Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.784874325Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.785034709Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.785194592Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.785254986Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.785327678Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.785400871Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.785473963Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.785545356Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.785605450Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.822469832Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.822693809Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.823229254Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.827726388Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.827850975Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.828001559Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.828090950Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.828153144Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.828281131Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.828362822Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.828605997Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.828693688Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.828986058Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.829102445Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.829171538Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.829232532Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.829305324Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.829630391Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.829764477Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.829828770Z" level=info msg="containerd successfully booted in 0.053503s"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.844807019Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.845161683Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.845319866Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.845527145Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.846984294Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.847117080Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.847261065Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.847390552Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.852809091Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.932121578Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.932364153Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.932423647Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.932473342Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.932523036Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.932571331Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.932806507Z" level=info msg="Loading containers: start."
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.032780544Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.034687053Z" level=warning msg="d3845df507f7bac7e6f2b8351bc52a8aa3d575b616f2568df8e81324776388c4 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d3845df507f7bac7e6f2b8351bc52a8aa3d575b616f2568df8e81324776388c4/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.072273274Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.097104478Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=d3845df507f7bac7e6f2b8351bc52a8aa3d575b616f2568df8e81324776388c4 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/d3845df507f7bac7e6f2b8351bc52a8aa3d575b616f2568df8e81324776388c4"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.097578031Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.116236655Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.129825389Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=a549c77298550ee24436a80c56d6547cacacac9fd24f52c0b210aedd214a412f path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/a549c77298550ee24436a80c56d6547cacacac9fd24f52c0b210aedd214a412f"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.131585212Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.134748394Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=1c3f4b67656465e286cb4e6f00ff0ad6dbd30c37dbabffdc8aac0f50ed16ef73 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/1c3f4b67656465e286cb4e6f00ff0ad6dbd30c37dbabffdc8aac0f50ed16ef73"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.153680091Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.175582490Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.200339701Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.200489386Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.258732431Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.278331261Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=7578c1a266f579d49991ad205c6ebc926038da58914f50df3c07a37fff7936cb path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/7578c1a266f579d49991ad205c6ebc926038da58914f50df3c07a37fff7936cb"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.278717122Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.298595624Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=1e93bc6784ccf561efd8ec373cc9939fe51069a650641995594aa2436a71f702 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/1e93bc6784ccf561efd8ec373cc9939fe51069a650641995594aa2436a71f702"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.299276456Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.303547826Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.304011080Z" level=warning msg="93e22a0c47b71a920571a5900014195f32ca34debc7d3c7eb89f357295026033 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/93e22a0c47b71a920571a5900014195f32ca34debc7d3c7eb89f357295026033/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.315147760Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=89fbee98dc16a7e8c3ab9d522cf15f08da91600edf8620b02a0f3bb7937057cc path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/89fbee98dc16a7e8c3ab9d522cf15f08da91600edf8620b02a0f3bb7937057cc"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.315532921Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.342782582Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=fec49ec6f213065612ad769910096f3e458b41215039f397d6fe4836993a3fd3 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/fec49ec6f213065612ad769910096f3e458b41215039f397d6fe4836993a3fd3"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.343491911Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.346574001Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=93e22a0c47b71a920571a5900014195f32ca34debc7d3c7eb89f357295026033 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/93e22a0c47b71a920571a5900014195f32ca34debc7d3c7eb89f357295026033"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.346986260Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.387824255Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.388277309Z" level=warning msg="bba15e8e6ffbbff8fff33be9d76605551e8ef2fd9519c0ce9a48199cf0661de2 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/bba15e8e6ffbbff8fff33be9d76605551e8ef2fd9519c0ce9a48199cf0661de2/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.403127316Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=bba15e8e6ffbbff8fff33be9d76605551e8ef2fd9519c0ce9a48199cf0661de2 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/bba15e8e6ffbbff8fff33be9d76605551e8ef2fd9519c0ce9a48199cf0661de2"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.420209099Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.555357314Z" level=info msg="Removing stale sandbox e6f3648f8eb2d345b603cbac8dd6c5f57fece50e739cebbf02fdde098dc21d50 (1e93bc6784ccf561efd8ec373cc9939fe51069a650641995594aa2436a71f702)"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.569307911Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 868192508eccb66a5d387edf6c012e2b23dae037d882a77eb03eb2d7b2778b65 8982825e49a55f47cb04cfdb9cdbdc024f313c9dc2bce4711df5ef09addd164d], retrying...."
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.715218844Z" level=info msg="Removing stale sandbox 727c26c359398653ec0aeb55ed361d52d5f7946bd9b603e249da4cb1948d7963 (7578c1a266f579d49991ad205c6ebc926038da58914f50df3c07a37fff7936cb)"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.724041657Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 868192508eccb66a5d387edf6c012e2b23dae037d882a77eb03eb2d7b2778b65 a68c282eba13d0da183cf30a23c0b0e7418c57be8fc4df23ab91ce5a7ea9d9f4], retrying...."
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.012636458Z" level=info msg="Removing stale sandbox 8a3b1bfa69b0df7a4f398d6669cb4fd5818fa8763e08061332632ee968ef6ae7 (fec49ec6f213065612ad769910096f3e458b41215039f397d6fe4836993a3fd3)"
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.020882737Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 868192508eccb66a5d387edf6c012e2b23dae037d882a77eb03eb2d7b2778b65 df5e0ac8e4ca3bafa31921d8375096e325174b7290d9c375fb4431a85a57c41f], retrying...."
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.167141273Z" level=info msg="Removing stale sandbox bd562ea0a6915f45522d031689d8c9c8aebd9524a7144fa4b5ffb7e121377aa9 (1c3f4b67656465e286cb4e6f00ff0ad6dbd30c37dbabffdc8aac0f50ed16ef73)"
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.184319962Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint d43de57a96ff6ce4c3404e41366432659b86a397c24b27370d3b874708b77f80 3a8ec84690549eac450321f6b04a88a400b0ad1030716016162697368728188d], retrying...."
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.373467128Z" level=info msg="Removing stale sandbox c4e3b8594913db8f85b7543dea5a273025d55bf185797cf9413ecad7e70e93a0 (89fbee98dc16a7e8c3ab9d522cf15f08da91600edf8620b02a0f3bb7937057cc)"
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.510979135Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint d43de57a96ff6ce4c3404e41366432659b86a397c24b27370d3b874708b77f80 57b6bf423c84a1c183c368198a64d6b4a7107c97761621f114faa4ccc1f89028], retrying...."
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.672392062Z" level=info msg="Removing stale sandbox e6cae70e4d025f20675d849ac19c25ac00ae60822f8e56907e7db10adce71cd2 (a549c77298550ee24436a80c56d6547cacacac9fd24f52c0b210aedd214a412f)"
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.681352670Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 868192508eccb66a5d387edf6c012e2b23dae037d882a77eb03eb2d7b2778b65 c2c3315b04ac321d961e3c425fb5a1412b4c8c4ff7f47865b8c0909a04b10bc2], retrying...."
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.721034519Z" level=info msg="There are old running containers, the network config will not take affect"
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.883281863Z" level=info msg="Loading containers: done."
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.955111610Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.955326789Z" level=info msg="Daemon has completed initialization"
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.989419094Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 09 00:55:00 running-upgrade-248700 systemd[1]: Started Docker Application Container Engine.
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.991218315Z" level=info msg="API listen on [::]:2376"
	Jan 09 00:55:01 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:01.590585373Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/92ae8374e25a831d6c8a305c0f3e54ab7f46f60d318c319d07dbf01c67db112a/shim.sock" debug=false pid=9220
	Jan 09 00:55:01 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:01.649453965Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c2ec5b1ecea5d91a792bde7818fb073fb043588bdb73273bf875b8c4ecdb5878/shim.sock" debug=false pid=9237
	Jan 09 00:55:01 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:01.769381233Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/644b39fcafb15ce3c53cf4c1a1dd84c6d07486cad11aaa6fa153c9cfb542b8b8/shim.sock" debug=false pid=9267
	Jan 09 00:55:01 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:01.818422594Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/423bc4f54c4adea21ddce4d637e8f4bf493a848695979c06ddf29380bb007209/shim.sock" debug=false pid=9288
	Jan 09 00:55:01 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:01.825062639Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3efba1120f6862bd0d8d4271e31cfc804010671ccbd838072a087e3f02e96087/shim.sock" debug=false pid=9287
	Jan 09 00:55:01 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:01.851233057Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5515a67f87db183672feec364c33d62b7fde365d14328a944f3a14895b0839cb/shim.sock" debug=false pid=9297
	Jan 09 00:55:02 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:02.203837549Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e5abdc70c0f4472d86f68f5c5286c40ffbc5155dd95f79ef28906f3155c386f9/shim.sock" debug=false pid=9465
	Jan 09 00:55:02 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:02.513424980Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fbb684cadda838507bf79134f59f1e8a5dbcabf152e9865b3cdc161041f46952/shim.sock" debug=false pid=9534
	Jan 09 00:55:02 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:02.692060715Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/248e749097ba271a22db42ad5752cb75ca0649973f44642ed648d8061834f8b8/shim.sock" debug=false pid=9559
	Jan 09 00:55:02 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:02.777269284Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2ef9611d99d6626e83e17b171d3793b58ba00fab751e633584740dacae750a3c/shim.sock" debug=false pid=9579
	Jan 09 00:55:03 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:03.378477328Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:03 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:03.379338545Z" level=warning msg="201a3fdfeb4baa6ce37e7f1da911cb218b06a61f0c1631b20247811bd5749c78 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/201a3fdfeb4baa6ce37e7f1da911cb218b06a61f0c1631b20247811bd5749c78/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:55:03 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:03.396197611Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=201a3fdfeb4baa6ce37e7f1da911cb218b06a61f0c1631b20247811bd5749c78 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/201a3fdfeb4baa6ce37e7f1da911cb218b06a61f0c1631b20247811bd5749c78"
	Jan 09 00:55:03 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:03.396577574Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:03 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:03.756960249Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8a6f02e8041cf90852a8d900967e4a73046dbe45147160b6f919ef1dee197175/shim.sock" debug=false pid=9727
	Jan 09 00:55:04 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:04.476701895Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/34e03bf45bfa32592ca073dbd41fecf4b2bb59bc139f3ba3f320128911ae15cb/shim.sock" debug=false pid=9796
	Jan 09 00:55:05 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:05.000615259Z" level=warning msg="257514166ff0f7cdc6035562484661f4cdc39be058baba64c8faaec9b46a344e cleanup: failed to unmount IPC: umount /var/lib/docker/containers/257514166ff0f7cdc6035562484661f4cdc39be058baba64c8faaec9b46a344e/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:55:05 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:05.001077914Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:05 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:05.084089706Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=257514166ff0f7cdc6035562484661f4cdc39be058baba64c8faaec9b46a344e path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/257514166ff0f7cdc6035562484661f4cdc39be058baba64c8faaec9b46a344e"
	Jan 09 00:55:05 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:05.086705657Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:05 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:05.296335585Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3639fc7037351ddc5f904ff314644ed46eb18feb8476ce61622cf7e424eebd6f/shim.sock" debug=false pid=9892
	Jan 09 00:55:05 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:05.790979961Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/33ba8bedcaf0b914ef7f1c5fe7106b82b72e372e40f75a91cdb18e21e89cfd5f/shim.sock" debug=false pid=9956
	Jan 09 00:55:06 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:06.876988181Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2fa9eaabc92e3ce0732410609d4af7480806c746472699390987c6262a66646b/shim.sock" debug=false pid=10047
	Jan 09 00:55:07 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:07.224221141Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/20bb4ea59ded727663207cabbc48987ebf9ce68e5c8c001a464a2ed1b01da2f6/shim.sock" debug=false pid=10089
	Jan 09 00:55:08 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:08.440406864Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c4a4865b28b6ff4514d9451ad34446236cbc5ade36733f07e48cdd761b65bb35/shim.sock" debug=false pid=10157
	Jan 09 00:55:10 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:10.153303397Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/65124b94e5cd2f1aa4ad0a358fe7f787d38ea78a8f913bd00d56319d1c9d6a6f/shim.sock" debug=false pid=10208
	Jan 09 00:55:14 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:14.871402973Z" level=info msg="shim reaped" id=2ef9611d99d6626e83e17b171d3793b58ba00fab751e633584740dacae750a3c
	Jan 09 00:55:14 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:14.881532171Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:14 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:14.882031326Z" level=warning msg="2ef9611d99d6626e83e17b171d3793b58ba00fab751e633584740dacae750a3c cleanup: failed to unmount IPC: umount /var/lib/docker/containers/2ef9611d99d6626e83e17b171d3793b58ba00fab751e633584740dacae750a3c/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:55:30 running-upgrade-248700 systemd[1]: Stopping Docker Application Container Engine...
	Jan 09 00:55:30 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:30.333856490Z" level=info msg="Processing signal 'terminated'"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.616699684Z" level=info msg="shim reaped" id=fbb684cadda838507bf79134f59f1e8a5dbcabf152e9865b3cdc161041f46952
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.619128313Z" level=info msg="shim reaped" id=3efba1120f6862bd0d8d4271e31cfc804010671ccbd838072a087e3f02e96087
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.621760728Z" level=info msg="shim reaped" id=5515a67f87db183672feec364c33d62b7fde365d14328a944f3a14895b0839cb
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.627509024Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.627851200Z" level=warning msg="fbb684cadda838507bf79134f59f1e8a5dbcabf152e9865b3cdc161041f46952 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/fbb684cadda838507bf79134f59f1e8a5dbcabf152e9865b3cdc161041f46952/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.637923793Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.639568777Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.662764448Z" level=info msg="shim reaped" id=3639fc7037351ddc5f904ff314644ed46eb18feb8476ce61622cf7e424eebd6f
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.663351307Z" level=info msg="shim reaped" id=20bb4ea59ded727663207cabbc48987ebf9ce68e5c8c001a464a2ed1b01da2f6
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.666052317Z" level=info msg="shim reaped" id=423bc4f54c4adea21ddce4d637e8f4bf493a848695979c06ddf29380bb007209
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.673507893Z" level=warning msg="20bb4ea59ded727663207cabbc48987ebf9ce68e5c8c001a464a2ed1b01da2f6 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/20bb4ea59ded727663207cabbc48987ebf9ce68e5c8c001a464a2ed1b01da2f6/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.673514793Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.675412160Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.676763765Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.689647760Z" level=info msg="shim reaped" id=2fa9eaabc92e3ce0732410609d4af7480806c746472699390987c6262a66646b
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.694665307Z" level=info msg="shim reaped" id=e5abdc70c0f4472d86f68f5c5286c40ffbc5155dd95f79ef28906f3155c386f9
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.699950736Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.704658505Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.704953785Z" level=warning msg="e5abdc70c0f4472d86f68f5c5286c40ffbc5155dd95f79ef28906f3155c386f9 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/e5abdc70c0f4472d86f68f5c5286c40ffbc5155dd95f79ef28906f3155c386f9/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.715674632Z" level=info msg="shim reaped" id=8a6f02e8041cf90852a8d900967e4a73046dbe45147160b6f919ef1dee197175
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.721614414Z" level=info msg="shim reaped" id=644b39fcafb15ce3c53cf4c1a1dd84c6d07486cad11aaa6fa153c9cfb542b8b8
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.725352752Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.733676767Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.748625217Z" level=info msg="shim reaped" id=92ae8374e25a831d6c8a305c0f3e54ab7f46f60d318c319d07dbf01c67db112a
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.760054114Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.762226162Z" level=info msg="shim reaped" id=c4a4865b28b6ff4514d9451ad34446236cbc5ade36733f07e48cdd761b65bb35
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.772561736Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.772821118Z" level=warning msg="c4a4865b28b6ff4514d9451ad34446236cbc5ade36733f07e48cdd761b65bb35 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c4a4865b28b6ff4514d9451ad34446236cbc5ade36733f07e48cdd761b65bb35/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.775408536Z" level=info msg="shim reaped" id=c2ec5b1ecea5d91a792bde7818fb073fb043588bdb73273bf875b8c4ecdb5878
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.784268414Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.828494107Z" level=info msg="shim reaped" id=65124b94e5cd2f1aa4ad0a358fe7f787d38ea78a8f913bd00d56319d1c9d6a6f
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.838431709Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.838619796Z" level=warning msg="65124b94e5cd2f1aa4ad0a358fe7f787d38ea78a8f913bd00d56319d1c9d6a6f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/65124b94e5cd2f1aa4ad0a358fe7f787d38ea78a8f913bd00d56319d1c9d6a6f/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:55:35 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:35.731801640Z" level=info msg="shim reaped" id=34e03bf45bfa32592ca073dbd41fecf4b2bb59bc139f3ba3f320128911ae15cb
	Jan 09 00:55:35 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:35.741503159Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:35 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:35.741832635Z" level=warning msg="34e03bf45bfa32592ca073dbd41fecf4b2bb59bc139f3ba3f320128911ae15cb cleanup: failed to unmount IPC: umount /var/lib/docker/containers/34e03bf45bfa32592ca073dbd41fecf4b2bb59bc139f3ba3f320128911ae15cb/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:55:35 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:35.878800215Z" level=info msg="shim reaped" id=33ba8bedcaf0b914ef7f1c5fe7106b82b72e372e40f75a91cdb18e21e89cfd5f
	Jan 09 00:55:35 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:35.889623055Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:35 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:35.890055924Z" level=warning msg="33ba8bedcaf0b914ef7f1c5fe7106b82b72e372e40f75a91cdb18e21e89cfd5f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/33ba8bedcaf0b914ef7f1c5fe7106b82b72e372e40f75a91cdb18e21e89cfd5f/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:55:40 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:40.642078344Z" level=info msg="Container 248e749097ba271a22db42ad5752cb75ca0649973f44642ed648d8061834f8b8 failed to exit within 10 seconds of signal 15 - using the force"
	Jan 09 00:55:40 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:40.786268916Z" level=info msg="shim reaped" id=248e749097ba271a22db42ad5752cb75ca0649973f44642ed648d8061834f8b8
	Jan 09 00:55:40 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:40.797028360Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:40 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:40.797222346Z" level=warning msg="248e749097ba271a22db42ad5752cb75ca0649973f44642ed648d8061834f8b8 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/248e749097ba271a22db42ad5752cb75ca0649973f44642ed648d8061834f8b8/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:55:43 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:43.355599247Z" level=info msg="Daemon shutdown complete"
	Jan 09 00:55:43 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:43.356266500Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Jan 09 00:55:43 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:43.356397591Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 09 00:55:43 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:43.357655002Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 09 00:55:44 running-upgrade-248700 systemd[1]: docker.service: Succeeded.
	Jan 09 00:55:44 running-upgrade-248700 systemd[1]: Stopped Docker Application Container Engine.
	Jan 09 00:55:44 running-upgrade-248700 systemd[1]: Starting Docker Application Container Engine...
	Jan 09 00:55:44 running-upgrade-248700 dockerd[11435]: time="2024-01-09T00:55:44.430560942Z" level=info msg="Starting up"
	Jan 09 00:55:44 running-upgrade-248700 dockerd[11435]: time="2024-01-09T00:55:44.434323978Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 09 00:55:44 running-upgrade-248700 dockerd[11435]: time="2024-01-09T00:55:44.434439669Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 09 00:55:44 running-upgrade-248700 dockerd[11435]: time="2024-01-09T00:55:44.434476567Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 09 00:55:44 running-upgrade-248700 dockerd[11435]: time="2024-01-09T00:55:44.434494266Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 09 00:55:44 running-upgrade-248700 dockerd[11435]: time="2024-01-09T00:55:44.434803644Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Jan 09 00:55:44 running-upgrade-248700 dockerd[11435]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused": unavailable
	Jan 09 00:55:44 running-upgrade-248700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 09 00:55:44 running-upgrade-248700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 09 00:55:44 running-upgrade-248700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Logs begin at Tue 2024-01-09 00:50:32 UTC, end at Tue 2024-01-09 00:55:44 UTC. --
	Jan 09 00:52:00 running-upgrade-248700 systemd[1]: Starting Docker Application Container Engine...
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.466602029Z" level=info msg="Starting up"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.469360429Z" level=info msg="libcontainerd: started new containerd process" pid=2757
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.469588129Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.469678529Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.469763429Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.469860629Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.513247329Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.513749629Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.514174429Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.514558129Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.514655929Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.516965229Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.517050929Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.517211629Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.517591729Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.518027129Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.518134329Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.518215629Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.518225929Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.518234229Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.548789729Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.548955529Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.549054229Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.549120929Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.549136429Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.549150029Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.549163129Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.549176029Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.549187629Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.549211629Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.549567429Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.549838129Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550617729Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550726329Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550763929Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550776429Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550791229Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550801929Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550812229Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550823229Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550833029Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550843029Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550852929Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.550991229Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.551187529Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.551211229Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.551223029Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.551360229Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.551558029Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.551574329Z" level=info msg="containerd successfully booted in 0.039917s"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.560857929Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.561038029Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.561074329Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.561228429Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.562814829Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.562956429Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.562982229Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.562992929Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.629742729Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.629870229Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.629885229Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.629892629Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.629903329Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.629952829Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.630208829Z" level=info msg="Loading containers: start."
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.821204729Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.920373829Z" level=info msg="Loading containers: done."
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.954901529Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Jan 09 00:52:00 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:00.955373429Z" level=info msg="Daemon has completed initialization"
	Jan 09 00:52:01 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:01.078706029Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 09 00:52:01 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:52:01.078729729Z" level=info msg="API listen on [::]:2376"
	Jan 09 00:52:01 running-upgrade-248700 systemd[1]: Started Docker Application Container Engine.
	Jan 09 00:53:09 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:09.294192436Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1176d6a000adf5eb30fd1fca44d807236a95b72286e5e29c1f92cd273e60aa95/shim.sock" debug=false pid=4316
	Jan 09 00:53:09 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:09.432639024Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0e42fdb859a8c96508d0c0b9cc129f88344266cc115129c77d5827135992ea69/shim.sock" debug=false pid=4346
	Jan 09 00:53:09 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:09.943853588Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1b90ccd1ff5bcb7c19e770525a867eebb4be3e87af8e17728b3ad83f113fea22/shim.sock" debug=false pid=4404
	Jan 09 00:53:10 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:10.892110926Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c245d5bfa602bdb31dfc8e3d40d008b3425da444079f621f1eeedccbdeafe998/shim.sock" debug=false pid=4475
	Jan 09 00:53:10 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:10.905286668Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4ae6d5f5c82d19faf3fec3e79e3e5f8534bc184bf9126e51bf03383b66c86717/shim.sock" debug=false pid=4489
	Jan 09 00:53:10 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:10.911519835Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ab55a5701fff195465f5ca8965a28e09ea28b3bf437e17f2736501f21f7d0fce/shim.sock" debug=false pid=4496
	Jan 09 00:53:23 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:23.657686607Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8200cc5d099990e98a166febffb03ee6ea1992a526047f76a4da153d217e5b56/shim.sock" debug=false pid=4890
	Jan 09 00:53:23 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:23.699064900Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8952a1aa457ee649177bbdc4051271f5e567369dcba7219a67df99216aaa9a9d/shim.sock" debug=false pid=4907
	Jan 09 00:53:24 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:24.034399548Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/df16ce882d922f207d4fe1648016387e15e87174b8563dfcc5e3990290afad4f/shim.sock" debug=false pid=4978
	Jan 09 00:53:24 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:24.083173461Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/17f21e88a391b1b11281bd266b0d1d00c7e34a45c8d21ae20b1c2ef52128cc24/shim.sock" debug=false pid=4996
	Jan 09 00:53:43 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:43.878393727Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/eeb8ce9d2294c649f53482045ffddd0821b42ee450293f22081907ac6db11407/shim.sock" debug=false pid=5587
	Jan 09 00:53:44 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:44.388224361Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b42833878fb4a34121d336e3984b3f8947c5b64d69285582810f419a05a625c6/shim.sock" debug=false pid=5643
	Jan 09 00:53:48 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:48.772102670Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/08a419fa7cf4b392526eb5a8241eab3f031a341682e123390201cb69f49d500f/shim.sock" debug=false pid=5807
	Jan 09 00:53:48 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:48.995785307Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/23e8949cc0b7abd922a3eb09b49923b869a234976c4ae60c8c3642203662b634/shim.sock" debug=false pid=5840
	Jan 09 00:53:50 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:50.086841455Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/065d81c58870e2cf9f731eab4834f85b8a131ec830cc0febb633474dbbb5abfa/shim.sock" debug=false pid=5934
	Jan 09 00:53:50 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:50.351722314Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/40e7b37799eaf2989dc4419aeee38c52271fbcf7ef6ca3e91da7076be7e9568a/shim.sock" debug=false pid=5958
	Jan 09 00:53:52 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:52.730040561Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/280e5d91bb4f4478d5cf6a4388306777e849f3a94cf04656cbea1e0ace2af750/shim.sock" debug=false pid=6056
	Jan 09 00:53:53 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:53:53.352077461Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/01d55f79860c6bbd923d78b59305cfc742b3e0d4cda72f482455e7b2c4bbdd13/shim.sock" debug=false pid=6136
	Jan 09 00:54:45 running-upgrade-248700 systemd[1]: Stopping Docker Application Container Engine...
	Jan 09 00:54:45 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:45.061327969Z" level=info msg="Processing signal 'terminated'"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.512349669Z" level=info msg="shim reaped" id=065d81c58870e2cf9f731eab4834f85b8a131ec830cc0febb633474dbbb5abfa
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.512884207Z" level=info msg="shim reaped" id=08a419fa7cf4b392526eb5a8241eab3f031a341682e123390201cb69f49d500f
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.525312665Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.527748083Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.526429236Z" level=warning msg="065d81c58870e2cf9f731eab4834f85b8a131ec830cc0febb633474dbbb5abfa cleanup: failed to unmount IPC: umount /var/lib/docker/containers/065d81c58870e2cf9f731eab4834f85b8a131ec830cc0febb633474dbbb5abfa/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.607572124Z" level=info msg="shim reaped" id=ab55a5701fff195465f5ca8965a28e09ea28b3bf437e17f2736501f21f7d0fce
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.616039042Z" level=warning msg="ab55a5701fff195465f5ca8965a28e09ea28b3bf437e17f2736501f21f7d0fce cleanup: failed to unmount IPC: umount /var/lib/docker/containers/ab55a5701fff195465f5ca8965a28e09ea28b3bf437e17f2736501f21f7d0fce/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.616069538Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.668311678Z" level=info msg="shim reaped" id=280e5d91bb4f4478d5cf6a4388306777e849f3a94cf04656cbea1e0ace2af750
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.670433932Z" level=info msg="shim reaped" id=eeb8ce9d2294c649f53482045ffddd0821b42ee450293f22081907ac6db11407
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.678280222Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.680753535Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.698466080Z" level=info msg="shim reaped" id=0e42fdb859a8c96508d0c0b9cc129f88344266cc115129c77d5827135992ea69
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.731756119Z" level=info msg="shim reaped" id=1176d6a000adf5eb30fd1fca44d807236a95b72286e5e29c1f92cd273e60aa95
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.733887872Z" level=info msg="shim reaped" id=1b90ccd1ff5bcb7c19e770525a867eebb4be3e87af8e17728b3ad83f113fea22
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.734081349Z" level=info msg="shim reaped" id=8952a1aa457ee649177bbdc4051271f5e567369dcba7219a67df99216aaa9a9d
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.738161776Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.738224369Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.739470824Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.746962755Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.751165367Z" level=info msg="shim reaped" id=df16ce882d922f207d4fe1648016387e15e87174b8563dfcc5e3990290afad4f
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.765514603Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.766030743Z" level=warning msg="df16ce882d922f207d4fe1648016387e15e87174b8563dfcc5e3990290afad4f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/df16ce882d922f207d4fe1648016387e15e87174b8563dfcc5e3990290afad4f/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.766597977Z" level=info msg="shim reaped" id=23e8949cc0b7abd922a3eb09b49923b869a234976c4ae60c8c3642203662b634
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.781691927Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.784562694Z" level=info msg="shim reaped" id=17f21e88a391b1b11281bd266b0d1d00c7e34a45c8d21ae20b1c2ef52128cc24
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.794041094Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.794599929Z" level=warning msg="17f21e88a391b1b11281bd266b0d1d00c7e34a45c8d21ae20b1c2ef52128cc24 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/17f21e88a391b1b11281bd266b0d1d00c7e34a45c8d21ae20b1c2ef52128cc24/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.811499669Z" level=info msg="shim reaped" id=b42833878fb4a34121d336e3984b3f8947c5b64d69285582810f419a05a625c6
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.814771989Z" level=info msg="shim reaped" id=8200cc5d099990e98a166febffb03ee6ea1992a526047f76a4da153d217e5b56
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.818291981Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.818585547Z" level=warning msg="b42833878fb4a34121d336e3984b3f8947c5b64d69285582810f419a05a625c6 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/b42833878fb4a34121d336e3984b3f8947c5b64d69285582810f419a05a625c6/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:46 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:46.828356514Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:47 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:47.548228494Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fec49ec6f213065612ad769910096f3e458b41215039f397d6fe4836993a3fd3/shim.sock" debug=false pid=7584
	Jan 09 00:54:47 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:47.576401866Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1e93bc6784ccf561efd8ec373cc9939fe51069a650641995594aa2436a71f702/shim.sock" debug=false pid=7600
	Jan 09 00:54:48 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:48.001337784Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d3845df507f7bac7e6f2b8351bc52a8aa3d575b616f2568df8e81324776388c4/shim.sock" debug=false pid=7683
	Jan 09 00:54:48 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:48.020450821Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/52ebbf9bf57265eafa1b6fc0b8588480c05d2c7a82199520154000fc1db2d1b1/shim.sock" debug=false pid=7698
	Jan 09 00:54:48 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:48.873356788Z" level=info msg="shim reaped" id=52ebbf9bf57265eafa1b6fc0b8588480c05d2c7a82199520154000fc1db2d1b1
	Jan 09 00:54:48 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:48.883820004Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:48 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:48.884478530Z" level=warning msg="52ebbf9bf57265eafa1b6fc0b8588480c05d2c7a82199520154000fc1db2d1b1 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/52ebbf9bf57265eafa1b6fc0b8588480c05d2c7a82199520154000fc1db2d1b1/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:49 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:49.705624338Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a549c77298550ee24436a80c56d6547cacacac9fd24f52c0b210aedd214a412f/shim.sock" debug=false pid=7827
	Jan 09 00:54:50 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:50.050301653Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/93e22a0c47b71a920571a5900014195f32ca34debc7d3c7eb89f357295026033/shim.sock" debug=false pid=7870
	Jan 09 00:54:50 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:50.483218199Z" level=info msg="shim reaped" id=40e7b37799eaf2989dc4419aeee38c52271fbcf7ef6ca3e91da7076be7e9568a
	Jan 09 00:54:50 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:50.492872432Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:50 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:50.493221594Z" level=warning msg="40e7b37799eaf2989dc4419aeee38c52271fbcf7ef6ca3e91da7076be7e9568a cleanup: failed to unmount IPC: umount /var/lib/docker/containers/40e7b37799eaf2989dc4419aeee38c52271fbcf7ef6ca3e91da7076be7e9568a/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:50 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:50.506590816Z" level=info msg="shim reaped" id=01d55f79860c6bbd923d78b59305cfc742b3e0d4cda72f482455e7b2c4bbdd13
	Jan 09 00:54:50 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:50.517149949Z" level=warning msg="01d55f79860c6bbd923d78b59305cfc742b3e0d4cda72f482455e7b2c4bbdd13 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/01d55f79860c6bbd923d78b59305cfc742b3e0d4cda72f482455e7b2c4bbdd13/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:50 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:50.520221509Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:50 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:50.740439866Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1c3f4b67656465e286cb4e6f00ff0ad6dbd30c37dbabffdc8aac0f50ed16ef73/shim.sock" debug=false pid=7990
	Jan 09 00:54:51 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:51.294333909Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/201a3fdfeb4baa6ce37e7f1da911cb218b06a61f0c1631b20247811bd5749c78/shim.sock" debug=false pid=8084
	Jan 09 00:54:51 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:51.781408082Z" level=info msg="shim reaped" id=4ae6d5f5c82d19faf3fec3e79e3e5f8534bc184bf9126e51bf03383b66c86717
	Jan 09 00:54:51 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:51.791796047Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:51 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:51.792013223Z" level=warning msg="4ae6d5f5c82d19faf3fec3e79e3e5f8534bc184bf9126e51bf03383b66c86717 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/4ae6d5f5c82d19faf3fec3e79e3e5f8534bc184bf9126e51bf03383b66c86717/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:52 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:52.030027549Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7578c1a266f579d49991ad205c6ebc926038da58914f50df3c07a37fff7936cb/shim.sock" debug=false pid=8176
	Jan 09 00:54:52 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:52.435519832Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bba15e8e6ffbbff8fff33be9d76605551e8ef2fd9519c0ce9a48199cf0661de2/shim.sock" debug=false pid=8222
	Jan 09 00:54:53 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:53.946240402Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/89fbee98dc16a7e8c3ab9d522cf15f08da91600edf8620b02a0f3bb7937057cc/shim.sock" debug=false pid=8289
	Jan 09 00:54:54 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:54.492267606Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/257514166ff0f7cdc6035562484661f4cdc39be058baba64c8faaec9b46a344e/shim.sock" debug=false pid=8355
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.357693399Z" level=info msg="Container c245d5bfa602bdb31dfc8e3d40d008b3425da444079f621f1eeedccbdeafe998 failed to exit within 10 seconds of signal 15 - using the force"
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.526229166Z" level=info msg="shim reaped" id=c245d5bfa602bdb31dfc8e3d40d008b3425da444079f621f1eeedccbdeafe998
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.536025241Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.536200923Z" level=warning msg="c245d5bfa602bdb31dfc8e3d40d008b3425da444079f621f1eeedccbdeafe998 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c245d5bfa602bdb31dfc8e3d40d008b3425da444079f621f1eeedccbdeafe998/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.645061333Z" level=info msg="Daemon shutdown complete"
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.645136426Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.645171322Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.649082213Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.713257698Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.742897298Z" level=warning msg="7f5106de09d67d5d3886b97d5345b2c68479b74f2cef746c98e9f93cc1cc785c cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7f5106de09d67d5d3886b97d5345b2c68479b74f2cef746c98e9f93cc1cc785c/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.782145491Z" level=error msg="7f5106de09d67d5d3886b97d5345b2c68479b74f2cef746c98e9f93cc1cc785c cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.782226583Z" level=error msg="Handler for POST /containers/7f5106de09d67d5d3886b97d5345b2c68479b74f2cef746c98e9f93cc1cc785c/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.857008859Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.874665711Z" level=warning msg="7f3bd4fb98bb561ef9ed83a66a895417cc880ec31109e51fef69f210c1fe254e cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7f3bd4fb98bb561ef9ed83a66a895417cc880ec31109e51fef69f210c1fe254e/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.886319092Z" level=error msg="7f3bd4fb98bb561ef9ed83a66a895417cc880ec31109e51fef69f210c1fe254e cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Jan 09 00:54:55 running-upgrade-248700 dockerd[2750]: time="2024-01-09T00:54:55.886532870Z" level=error msg="Handler for POST /containers/7f3bd4fb98bb561ef9ed83a66a895417cc880ec31109e51fef69f210c1fe254e/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Succeeded.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: Stopped Docker Application Container Engine.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Found left-over process 7584 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Found left-over process 7600 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Found left-over process 7683 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Found left-over process 7827 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Found left-over process 7870 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Found left-over process 7990 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Found left-over process 8084 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Found left-over process 8176 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Found left-over process 8222 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Found left-over process 8289 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: docker.service: Found left-over process 8355 (containerd-shim) in control group while starting unit. Ignoring.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Jan 09 00:54:56 running-upgrade-248700 systemd[1]: Starting Docker Application Container Engine...
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.728612151Z" level=info msg="Starting up"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.731616140Z" level=info msg="libcontainerd: started new containerd process" pid=8463
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.731825418Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.731901511Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.732058894Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.732143985Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.778358600Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.779062327Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.779700461Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.780245805Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.780375591Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.782149207Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.782240298Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.782818938Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.783518266Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.784035612Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.784120803Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.784184497Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.784271988Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.784318883Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.784770936Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.784874325Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.785034709Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.785194592Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.785254986Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.785327678Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.785400871Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.785473963Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.785545356Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.785605450Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.822469832Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.822693809Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.823229254Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.827726388Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.827850975Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.828001559Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.828090950Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.828153144Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.828281131Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.828362822Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.828605997Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.828693688Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.828986058Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.829102445Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.829171538Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.829232532Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.829305324Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.829630391Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.829764477Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.829828770Z" level=info msg="containerd successfully booted in 0.053503s"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.844807019Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.845161683Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.845319866Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.845527145Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.846984294Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.847117080Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.847261065Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.847390552Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.852809091Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.932121578Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.932364153Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.932423647Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.932473342Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.932523036Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.932571331Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Jan 09 00:54:56 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:56.932806507Z" level=info msg="Loading containers: start."
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.032780544Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.034687053Z" level=warning msg="d3845df507f7bac7e6f2b8351bc52a8aa3d575b616f2568df8e81324776388c4 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d3845df507f7bac7e6f2b8351bc52a8aa3d575b616f2568df8e81324776388c4/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.072273274Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.097104478Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=d3845df507f7bac7e6f2b8351bc52a8aa3d575b616f2568df8e81324776388c4 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/d3845df507f7bac7e6f2b8351bc52a8aa3d575b616f2568df8e81324776388c4"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.097578031Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.116236655Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.129825389Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=a549c77298550ee24436a80c56d6547cacacac9fd24f52c0b210aedd214a412f path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/a549c77298550ee24436a80c56d6547cacacac9fd24f52c0b210aedd214a412f"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.131585212Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.134748394Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=1c3f4b67656465e286cb4e6f00ff0ad6dbd30c37dbabffdc8aac0f50ed16ef73 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/1c3f4b67656465e286cb4e6f00ff0ad6dbd30c37dbabffdc8aac0f50ed16ef73"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.153680091Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.175582490Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.200339701Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.200489386Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.258732431Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.278331261Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=7578c1a266f579d49991ad205c6ebc926038da58914f50df3c07a37fff7936cb path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/7578c1a266f579d49991ad205c6ebc926038da58914f50df3c07a37fff7936cb"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.278717122Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.298595624Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=1e93bc6784ccf561efd8ec373cc9939fe51069a650641995594aa2436a71f702 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/1e93bc6784ccf561efd8ec373cc9939fe51069a650641995594aa2436a71f702"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.299276456Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.303547826Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.304011080Z" level=warning msg="93e22a0c47b71a920571a5900014195f32ca34debc7d3c7eb89f357295026033 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/93e22a0c47b71a920571a5900014195f32ca34debc7d3c7eb89f357295026033/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.315147760Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=89fbee98dc16a7e8c3ab9d522cf15f08da91600edf8620b02a0f3bb7937057cc path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/89fbee98dc16a7e8c3ab9d522cf15f08da91600edf8620b02a0f3bb7937057cc"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.315532921Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.342782582Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=fec49ec6f213065612ad769910096f3e458b41215039f397d6fe4836993a3fd3 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/fec49ec6f213065612ad769910096f3e458b41215039f397d6fe4836993a3fd3"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.343491911Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.346574001Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=93e22a0c47b71a920571a5900014195f32ca34debc7d3c7eb89f357295026033 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/93e22a0c47b71a920571a5900014195f32ca34debc7d3c7eb89f357295026033"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.346986260Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.387824255Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.388277309Z" level=warning msg="bba15e8e6ffbbff8fff33be9d76605551e8ef2fd9519c0ce9a48199cf0661de2 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/bba15e8e6ffbbff8fff33be9d76605551e8ef2fd9519c0ce9a48199cf0661de2/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.403127316Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=bba15e8e6ffbbff8fff33be9d76605551e8ef2fd9519c0ce9a48199cf0661de2 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/bba15e8e6ffbbff8fff33be9d76605551e8ef2fd9519c0ce9a48199cf0661de2"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.420209099Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.555357314Z" level=info msg="Removing stale sandbox e6f3648f8eb2d345b603cbac8dd6c5f57fece50e739cebbf02fdde098dc21d50 (1e93bc6784ccf561efd8ec373cc9939fe51069a650641995594aa2436a71f702)"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.569307911Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 868192508eccb66a5d387edf6c012e2b23dae037d882a77eb03eb2d7b2778b65 8982825e49a55f47cb04cfdb9cdbdc024f313c9dc2bce4711df5ef09addd164d], retrying...."
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.715218844Z" level=info msg="Removing stale sandbox 727c26c359398653ec0aeb55ed361d52d5f7946bd9b603e249da4cb1948d7963 (7578c1a266f579d49991ad205c6ebc926038da58914f50df3c07a37fff7936cb)"
	Jan 09 00:54:59 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:54:59.724041657Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 868192508eccb66a5d387edf6c012e2b23dae037d882a77eb03eb2d7b2778b65 a68c282eba13d0da183cf30a23c0b0e7418c57be8fc4df23ab91ce5a7ea9d9f4], retrying...."
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.012636458Z" level=info msg="Removing stale sandbox 8a3b1bfa69b0df7a4f398d6669cb4fd5818fa8763e08061332632ee968ef6ae7 (fec49ec6f213065612ad769910096f3e458b41215039f397d6fe4836993a3fd3)"
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.020882737Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 868192508eccb66a5d387edf6c012e2b23dae037d882a77eb03eb2d7b2778b65 df5e0ac8e4ca3bafa31921d8375096e325174b7290d9c375fb4431a85a57c41f], retrying...."
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.167141273Z" level=info msg="Removing stale sandbox bd562ea0a6915f45522d031689d8c9c8aebd9524a7144fa4b5ffb7e121377aa9 (1c3f4b67656465e286cb4e6f00ff0ad6dbd30c37dbabffdc8aac0f50ed16ef73)"
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.184319962Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint d43de57a96ff6ce4c3404e41366432659b86a397c24b27370d3b874708b77f80 3a8ec84690549eac450321f6b04a88a400b0ad1030716016162697368728188d], retrying...."
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.373467128Z" level=info msg="Removing stale sandbox c4e3b8594913db8f85b7543dea5a273025d55bf185797cf9413ecad7e70e93a0 (89fbee98dc16a7e8c3ab9d522cf15f08da91600edf8620b02a0f3bb7937057cc)"
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.510979135Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint d43de57a96ff6ce4c3404e41366432659b86a397c24b27370d3b874708b77f80 57b6bf423c84a1c183c368198a64d6b4a7107c97761621f114faa4ccc1f89028], retrying...."
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.672392062Z" level=info msg="Removing stale sandbox e6cae70e4d025f20675d849ac19c25ac00ae60822f8e56907e7db10adce71cd2 (a549c77298550ee24436a80c56d6547cacacac9fd24f52c0b210aedd214a412f)"
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.681352670Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 868192508eccb66a5d387edf6c012e2b23dae037d882a77eb03eb2d7b2778b65 c2c3315b04ac321d961e3c425fb5a1412b4c8c4ff7f47865b8c0909a04b10bc2], retrying...."
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.721034519Z" level=info msg="There are old running containers, the network config will not take affect"
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.883281863Z" level=info msg="Loading containers: done."
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.955111610Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.955326789Z" level=info msg="Daemon has completed initialization"
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.989419094Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 09 00:55:00 running-upgrade-248700 systemd[1]: Started Docker Application Container Engine.
	Jan 09 00:55:00 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:00.991218315Z" level=info msg="API listen on [::]:2376"
	Jan 09 00:55:01 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:01.590585373Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/92ae8374e25a831d6c8a305c0f3e54ab7f46f60d318c319d07dbf01c67db112a/shim.sock" debug=false pid=9220
	Jan 09 00:55:01 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:01.649453965Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c2ec5b1ecea5d91a792bde7818fb073fb043588bdb73273bf875b8c4ecdb5878/shim.sock" debug=false pid=9237
	Jan 09 00:55:01 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:01.769381233Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/644b39fcafb15ce3c53cf4c1a1dd84c6d07486cad11aaa6fa153c9cfb542b8b8/shim.sock" debug=false pid=9267
	Jan 09 00:55:01 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:01.818422594Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/423bc4f54c4adea21ddce4d637e8f4bf493a848695979c06ddf29380bb007209/shim.sock" debug=false pid=9288
	Jan 09 00:55:01 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:01.825062639Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3efba1120f6862bd0d8d4271e31cfc804010671ccbd838072a087e3f02e96087/shim.sock" debug=false pid=9287
	Jan 09 00:55:01 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:01.851233057Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5515a67f87db183672feec364c33d62b7fde365d14328a944f3a14895b0839cb/shim.sock" debug=false pid=9297
	Jan 09 00:55:02 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:02.203837549Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e5abdc70c0f4472d86f68f5c5286c40ffbc5155dd95f79ef28906f3155c386f9/shim.sock" debug=false pid=9465
	Jan 09 00:55:02 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:02.513424980Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fbb684cadda838507bf79134f59f1e8a5dbcabf152e9865b3cdc161041f46952/shim.sock" debug=false pid=9534
	Jan 09 00:55:02 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:02.692060715Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/248e749097ba271a22db42ad5752cb75ca0649973f44642ed648d8061834f8b8/shim.sock" debug=false pid=9559
	Jan 09 00:55:02 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:02.777269284Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2ef9611d99d6626e83e17b171d3793b58ba00fab751e633584740dacae750a3c/shim.sock" debug=false pid=9579
	Jan 09 00:55:03 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:03.378477328Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:03 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:03.379338545Z" level=warning msg="201a3fdfeb4baa6ce37e7f1da911cb218b06a61f0c1631b20247811bd5749c78 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/201a3fdfeb4baa6ce37e7f1da911cb218b06a61f0c1631b20247811bd5749c78/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:55:03 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:03.396197611Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=201a3fdfeb4baa6ce37e7f1da911cb218b06a61f0c1631b20247811bd5749c78 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/201a3fdfeb4baa6ce37e7f1da911cb218b06a61f0c1631b20247811bd5749c78"
	Jan 09 00:55:03 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:03.396577574Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:03 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:03.756960249Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8a6f02e8041cf90852a8d900967e4a73046dbe45147160b6f919ef1dee197175/shim.sock" debug=false pid=9727
	Jan 09 00:55:04 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:04.476701895Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/34e03bf45bfa32592ca073dbd41fecf4b2bb59bc139f3ba3f320128911ae15cb/shim.sock" debug=false pid=9796
	Jan 09 00:55:05 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:05.000615259Z" level=warning msg="257514166ff0f7cdc6035562484661f4cdc39be058baba64c8faaec9b46a344e cleanup: failed to unmount IPC: umount /var/lib/docker/containers/257514166ff0f7cdc6035562484661f4cdc39be058baba64c8faaec9b46a344e/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:55:05 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:05.001077914Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:05 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:05.084089706Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=257514166ff0f7cdc6035562484661f4cdc39be058baba64c8faaec9b46a344e path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/257514166ff0f7cdc6035562484661f4cdc39be058baba64c8faaec9b46a344e"
	Jan 09 00:55:05 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:05.086705657Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:05 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:05.296335585Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3639fc7037351ddc5f904ff314644ed46eb18feb8476ce61622cf7e424eebd6f/shim.sock" debug=false pid=9892
	Jan 09 00:55:05 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:05.790979961Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/33ba8bedcaf0b914ef7f1c5fe7106b82b72e372e40f75a91cdb18e21e89cfd5f/shim.sock" debug=false pid=9956
	Jan 09 00:55:06 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:06.876988181Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2fa9eaabc92e3ce0732410609d4af7480806c746472699390987c6262a66646b/shim.sock" debug=false pid=10047
	Jan 09 00:55:07 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:07.224221141Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/20bb4ea59ded727663207cabbc48987ebf9ce68e5c8c001a464a2ed1b01da2f6/shim.sock" debug=false pid=10089
	Jan 09 00:55:08 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:08.440406864Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c4a4865b28b6ff4514d9451ad34446236cbc5ade36733f07e48cdd761b65bb35/shim.sock" debug=false pid=10157
	Jan 09 00:55:10 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:10.153303397Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/65124b94e5cd2f1aa4ad0a358fe7f787d38ea78a8f913bd00d56319d1c9d6a6f/shim.sock" debug=false pid=10208
	Jan 09 00:55:14 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:14.871402973Z" level=info msg="shim reaped" id=2ef9611d99d6626e83e17b171d3793b58ba00fab751e633584740dacae750a3c
	Jan 09 00:55:14 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:14.881532171Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:14 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:14.882031326Z" level=warning msg="2ef9611d99d6626e83e17b171d3793b58ba00fab751e633584740dacae750a3c cleanup: failed to unmount IPC: umount /var/lib/docker/containers/2ef9611d99d6626e83e17b171d3793b58ba00fab751e633584740dacae750a3c/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:55:30 running-upgrade-248700 systemd[1]: Stopping Docker Application Container Engine...
	Jan 09 00:55:30 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:30.333856490Z" level=info msg="Processing signal 'terminated'"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.616699684Z" level=info msg="shim reaped" id=fbb684cadda838507bf79134f59f1e8a5dbcabf152e9865b3cdc161041f46952
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.619128313Z" level=info msg="shim reaped" id=3efba1120f6862bd0d8d4271e31cfc804010671ccbd838072a087e3f02e96087
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.621760728Z" level=info msg="shim reaped" id=5515a67f87db183672feec364c33d62b7fde365d14328a944f3a14895b0839cb
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.627509024Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.627851200Z" level=warning msg="fbb684cadda838507bf79134f59f1e8a5dbcabf152e9865b3cdc161041f46952 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/fbb684cadda838507bf79134f59f1e8a5dbcabf152e9865b3cdc161041f46952/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.637923793Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.639568777Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.662764448Z" level=info msg="shim reaped" id=3639fc7037351ddc5f904ff314644ed46eb18feb8476ce61622cf7e424eebd6f
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.663351307Z" level=info msg="shim reaped" id=20bb4ea59ded727663207cabbc48987ebf9ce68e5c8c001a464a2ed1b01da2f6
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.666052317Z" level=info msg="shim reaped" id=423bc4f54c4adea21ddce4d637e8f4bf493a848695979c06ddf29380bb007209
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.673507893Z" level=warning msg="20bb4ea59ded727663207cabbc48987ebf9ce68e5c8c001a464a2ed1b01da2f6 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/20bb4ea59ded727663207cabbc48987ebf9ce68e5c8c001a464a2ed1b01da2f6/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.673514793Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.675412160Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.676763765Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.689647760Z" level=info msg="shim reaped" id=2fa9eaabc92e3ce0732410609d4af7480806c746472699390987c6262a66646b
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.694665307Z" level=info msg="shim reaped" id=e5abdc70c0f4472d86f68f5c5286c40ffbc5155dd95f79ef28906f3155c386f9
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.699950736Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.704658505Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.704953785Z" level=warning msg="e5abdc70c0f4472d86f68f5c5286c40ffbc5155dd95f79ef28906f3155c386f9 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/e5abdc70c0f4472d86f68f5c5286c40ffbc5155dd95f79ef28906f3155c386f9/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.715674632Z" level=info msg="shim reaped" id=8a6f02e8041cf90852a8d900967e4a73046dbe45147160b6f919ef1dee197175
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.721614414Z" level=info msg="shim reaped" id=644b39fcafb15ce3c53cf4c1a1dd84c6d07486cad11aaa6fa153c9cfb542b8b8
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.725352752Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.733676767Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.748625217Z" level=info msg="shim reaped" id=92ae8374e25a831d6c8a305c0f3e54ab7f46f60d318c319d07dbf01c67db112a
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.760054114Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.762226162Z" level=info msg="shim reaped" id=c4a4865b28b6ff4514d9451ad34446236cbc5ade36733f07e48cdd761b65bb35
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.772561736Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.772821118Z" level=warning msg="c4a4865b28b6ff4514d9451ad34446236cbc5ade36733f07e48cdd761b65bb35 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c4a4865b28b6ff4514d9451ad34446236cbc5ade36733f07e48cdd761b65bb35/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.775408536Z" level=info msg="shim reaped" id=c2ec5b1ecea5d91a792bde7818fb073fb043588bdb73273bf875b8c4ecdb5878
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.784268414Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.828494107Z" level=info msg="shim reaped" id=65124b94e5cd2f1aa4ad0a358fe7f787d38ea78a8f913bd00d56319d1c9d6a6f
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.838431709Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:31 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:31.838619796Z" level=warning msg="65124b94e5cd2f1aa4ad0a358fe7f787d38ea78a8f913bd00d56319d1c9d6a6f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/65124b94e5cd2f1aa4ad0a358fe7f787d38ea78a8f913bd00d56319d1c9d6a6f/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:55:35 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:35.731801640Z" level=info msg="shim reaped" id=34e03bf45bfa32592ca073dbd41fecf4b2bb59bc139f3ba3f320128911ae15cb
	Jan 09 00:55:35 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:35.741503159Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:35 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:35.741832635Z" level=warning msg="34e03bf45bfa32592ca073dbd41fecf4b2bb59bc139f3ba3f320128911ae15cb cleanup: failed to unmount IPC: umount /var/lib/docker/containers/34e03bf45bfa32592ca073dbd41fecf4b2bb59bc139f3ba3f320128911ae15cb/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:55:35 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:35.878800215Z" level=info msg="shim reaped" id=33ba8bedcaf0b914ef7f1c5fe7106b82b72e372e40f75a91cdb18e21e89cfd5f
	Jan 09 00:55:35 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:35.889623055Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:35 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:35.890055924Z" level=warning msg="33ba8bedcaf0b914ef7f1c5fe7106b82b72e372e40f75a91cdb18e21e89cfd5f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/33ba8bedcaf0b914ef7f1c5fe7106b82b72e372e40f75a91cdb18e21e89cfd5f/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:55:40 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:40.642078344Z" level=info msg="Container 248e749097ba271a22db42ad5752cb75ca0649973f44642ed648d8061834f8b8 failed to exit within 10 seconds of signal 15 - using the force"
	Jan 09 00:55:40 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:40.786268916Z" level=info msg="shim reaped" id=248e749097ba271a22db42ad5752cb75ca0649973f44642ed648d8061834f8b8
	Jan 09 00:55:40 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:40.797028360Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 00:55:40 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:40.797222346Z" level=warning msg="248e749097ba271a22db42ad5752cb75ca0649973f44642ed648d8061834f8b8 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/248e749097ba271a22db42ad5752cb75ca0649973f44642ed648d8061834f8b8/mounts/shm, flags: 0x2: no such file or directory"
	Jan 09 00:55:43 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:43.355599247Z" level=info msg="Daemon shutdown complete"
	Jan 09 00:55:43 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:43.356266500Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Jan 09 00:55:43 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:43.356397591Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 09 00:55:43 running-upgrade-248700 dockerd[8456]: time="2024-01-09T00:55:43.357655002Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 09 00:55:44 running-upgrade-248700 systemd[1]: docker.service: Succeeded.
	Jan 09 00:55:44 running-upgrade-248700 systemd[1]: Stopped Docker Application Container Engine.
	Jan 09 00:55:44 running-upgrade-248700 systemd[1]: Starting Docker Application Container Engine...
	Jan 09 00:55:44 running-upgrade-248700 dockerd[11435]: time="2024-01-09T00:55:44.430560942Z" level=info msg="Starting up"
	Jan 09 00:55:44 running-upgrade-248700 dockerd[11435]: time="2024-01-09T00:55:44.434323978Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 09 00:55:44 running-upgrade-248700 dockerd[11435]: time="2024-01-09T00:55:44.434439669Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 09 00:55:44 running-upgrade-248700 dockerd[11435]: time="2024-01-09T00:55:44.434476567Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 09 00:55:44 running-upgrade-248700 dockerd[11435]: time="2024-01-09T00:55:44.434494266Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 09 00:55:44 running-upgrade-248700 dockerd[11435]: time="2024-01-09T00:55:44.434803644Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Jan 09 00:55:44 running-upgrade-248700 dockerd[11435]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused": unavailable
	Jan 09 00:55:44 running-upgrade-248700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 09 00:55:44 running-upgrade-248700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 09 00:55:44 running-upgrade-248700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0109 00:55:44.548821    5580 out.go:239] * 
	* 
	W0109 00:55:44.550543    5580 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0109 00:55:44.561384    5580 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-windows-amd64.exe start -p running-upgrade-248700 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2024-01-09 00:55:45.1191527 +0000 UTC m=+7364.628123101
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-248700 -n running-upgrade-248700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-248700 -n running-upgrade-248700: exit status 6 (12.8596093s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0109 00:55:45.265152   15252 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0109 00:55:57.927050   15252 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-248700" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-248700" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-248700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-248700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-248700: (58.3076232s)
--- FAIL: TestRunningBinaryUpgrade (479.03s)
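The journal above pins the failure: after the binary upgrade restarted docker.service, the new dockerd (pid 11435) could not dial /run/containerd/containerd.sock ("connection refused"), systemd marked the unit failed, and minikube exited with status 90. The lines below are a minimal diagnostic sketch for inspecting that state by hand before the profile is deleted; it assumes the guest is still reachable over SSH, and whether this image ships a separate containerd.service unit is an assumption, so that status call is allowed to fail.

	# Host side: shell into the upgraded guest (profile name taken from the log above)
	out/minikube-windows-amd64.exe -p running-upgrade-248700 ssh
	# Guest side: unit state, recent journal, and whether the expected containerd sockets exist
	sudo systemctl status docker --no-pager
	sudo systemctl status containerd --no-pager || true
	sudo journalctl -u docker --no-pager | tail -n 50
	ls -l /run/containerd/containerd.sock /var/run/docker/containerd/containerd.sock

The repeated "Unable to resolve the current Docker CLI context \"default\"" warning in the stderr blocks is a separate, host-side condition (a missing meta.json under C:\Users\jenkins.minikube1\.docker\contexts); running `docker context ls` on the host would show the configured contexts, and `docker context use default` is one possible way to reset it, though whether that silences the warning for these runs is untested here.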

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (303.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-248700 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-248700 --driver=hyperv: exit status 1 (4m59.6925715s)

                                                
                                                
-- stdout --
	* [NoKubernetes-248700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node NoKubernetes-248700 in cluster NoKubernetes-248700
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0109 00:48:57.813924    9408 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-248700 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-248700 -n NoKubernetes-248700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-248700 -n NoKubernetes-248700: exit status 7 (3.5938276s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	W0109 00:53:57.476674    9252 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0109 00:54:00.882381    9252 main.go:137] libmachine: [stderr =====>] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "NoKubernetes-248700".
	At line:1 char:3
	+ ( Hyper-V\Get-VM NoKubernetes-248700 ).state
	+   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	    + CategoryInfo          : InvalidArgument: (NoKubernetes-248700:String) [Get-VM], VirtualizationException
	    + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM
	 
	

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-248700" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (303.29s)
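Here the start spent the whole timeout on "Creating hyperv VM ..." and the post-mortem shows Hyper-V has no VM named NoKubernetes-248700, so VM creation never completed rather than the guest failing later. The sketch below is a host-side PowerShell check-and-cleanup, assuming an elevated session on the Hyper-V host; the VM and profile names come from the log, the cmdlets are the standard Hyper-V module ones, and removing a half-created VM this way is a cleanup suggestion rather than something the harness did.

	# List VMs so a partially created NoKubernetes-248700 (if any) is visible
	Hyper-V\Get-VM | Select-Object Name, State, MemoryAssigned
	# If a leftover VM exists, stop and remove it, then drop the stale minikube profile
	Hyper-V\Get-VM -Name 'NoKubernetes-248700' -ErrorAction SilentlyContinue |
	    Stop-VM -Force -Passthru | Remove-VM -Force
	out/minikube-windows-amd64.exe delete -p NoKubernetes-248700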

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (630.82s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.11401904.exe start -p stopped-upgrade-748100 --memory=2200 --vm-driver=hyperv
E0109 00:53:27.420249   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
version_upgrade_test.go:196: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.11401904.exe start -p stopped-upgrade-748100 --memory=2200 --vm-driver=hyperv: (4m39.0706206s)
version_upgrade_test.go:205: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.11401904.exe -p stopped-upgrade-748100 stop
version_upgrade_test.go:205: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.11401904.exe -p stopped-upgrade-748100 stop: (26.9576924s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-748100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0109 00:58:27.424759   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p stopped-upgrade-748100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (5m24.5760379s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-748100] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the hyperv driver based on existing profile
	* Starting control plane node stopped-upgrade-748100 in cluster stopped-upgrade-748100
	* Restarting existing hyperv VM for "stopped-upgrade-748100" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0109 00:58:22.255285    8412 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0109 00:58:22.332703    8412 out.go:296] Setting OutFile to fd 820 ...
	I0109 00:58:22.332703    8412 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:58:22.332703    8412 out.go:309] Setting ErrFile to fd 1772...
	I0109 00:58:22.332703    8412 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:58:22.356697    8412 out.go:303] Setting JSON to false
	I0109 00:58:22.359690    8412 start.go:128] hostinfo: {"hostname":"minikube1","uptime":9397,"bootTime":1704752505,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0109 00:58:22.359690    8412 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0109 00:58:22.527689    8412 out.go:177] * [stopped-upgrade-748100] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I0109 00:58:22.629284    8412 notify.go:220] Checking for updates...
	I0109 00:58:22.719789    8412 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0109 00:58:22.885576    8412 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0109 00:58:23.084482    8412 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0109 00:58:23.271266    8412 out.go:177]   - MINIKUBE_LOCATION=17830
	I0109 00:58:23.473903    8412 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:58:23.527006    8412 config.go:182] Loaded profile config "stopped-upgrade-748100": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0109 00:58:23.527006    8412 start_flags.go:694] config upgrade: Driver=hyperv
	I0109 00:58:23.527006    8412 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617
	I0109 00:58:23.527006    8412 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\stopped-upgrade-748100\config.json ...
	I0109 00:58:23.670903    8412 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0109 00:58:23.724192    8412 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:58:30.074870    8412 out.go:177] * Using the hyperv driver based on existing profile
	I0109 00:58:30.129012    8412 start.go:298] selected driver: hyperv
	I0109 00:58:30.129012    8412 start.go:902] validating driver "hyperv" against &{Name:stopped-upgrade-748100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.24.101.209 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0109 00:58:30.130167    8412 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0109 00:58:30.185296    8412 cni.go:84] Creating CNI manager for ""
	I0109 00:58:30.185296    8412 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0109 00:58:30.185296    8412 start_flags.go:323] config:
	{Name:stopped-upgrade-748100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.24.101.209 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0109 00:58:30.186048    8412 iso.go:125] acquiring lock: {Name:mk6e44fd4f974e035b521383471f58bfbae3f4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:58:30.282602    8412 out.go:177] * Starting control plane node stopped-upgrade-748100 in cluster stopped-upgrade-748100
	I0109 00:58:30.287660    8412 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	W0109 00:58:30.334894    8412 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0109 00:58:30.335341    8412 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\stopped-upgrade-748100\config.json ...
	I0109 00:58:30.335407    8412 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0
	I0109 00:58:30.335515    8412 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0
	I0109 00:58:30.335407    8412 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0109 00:58:30.335515    8412 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0109 00:58:30.335515    8412 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I0109 00:58:30.335515    8412 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5
	I0109 00:58:30.335573    8412 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0
	I0109 00:58:30.335573    8412 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0
	I0109 00:58:30.339401    8412 start.go:365] acquiring machines lock for stopped-upgrade-748100: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0109 00:58:30.527585    8412 cache.go:107] acquiring lock: {Name:mk67b634fe9a890edc5195da54a2f3093e0c8f30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:58:30.528145    8412 cache.go:107] acquiring lock: {Name:mkcd99a49ef11cbbf53d95904dadb7eadb7e30f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:58:30.528406    8412 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0109 00:58:30.528406    8412 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 exists
	I0109 00:58:30.528700    8412 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.1" took 192.8697ms
	I0109 00:58:30.528742    8412 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 succeeded
	I0109 00:58:30.528601    8412 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 193.0275ms
	I0109 00:58:30.528854    8412 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0109 00:58:30.533866    8412 cache.go:107] acquiring lock: {Name:mkf253ced278c18e0b579f9f5e07f6a2fe7db678 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:58:30.534401    8412 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 exists
	I0109 00:58:30.534591    8412 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.4.3-0" took 198.7403ms
	I0109 00:58:30.534694    8412 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 succeeded
	I0109 00:58:30.536729    8412 cache.go:107] acquiring lock: {Name:mk69342e4f48cfcf5669830048d73215a892bfa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:58:30.536729    8412 cache.go:107] acquiring lock: {Name:mk7a50c4bf2c20bec1fff9de3ac74780139c1c4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:58:30.536729    8412 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 exists
	I0109 00:58:30.536729    8412 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns_1.6.5" took 200.8432ms
	I0109 00:58:30.536729    8412 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 succeeded
	I0109 00:58:30.536729    8412 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 exists
	I0109 00:58:30.537281    8412 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.17.0" took 201.1486ms
	I0109 00:58:30.537378    8412 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 succeeded
	I0109 00:58:30.552348    8412 cache.go:107] acquiring lock: {Name:mka7be082bbc64a256cc388eda31b6c9edba386f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:58:30.552348    8412 cache.go:107] acquiring lock: {Name:mkbbc88bc55edd0ef8bd1c53673fe74e0129caa1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:58:30.552348    8412 cache.go:107] acquiring lock: {Name:mk4e8ee16ba5b475b341c78282e92381b8584a70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:58:30.552917    8412 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 exists
	I0109 00:58:30.552952    8412 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 exists
	I0109 00:58:30.553127    8412 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.17.0" took 217.6124ms
	I0109 00:58:30.553183    8412 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 succeeded
	I0109 00:58:30.553183    8412 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 exists
	I0109 00:58:30.553303    8412 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.17.0" took 217.7298ms
	I0109 00:58:30.553423    8412 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 succeeded
	I0109 00:58:30.553303    8412 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.17.0" took 217.1702ms
	I0109 00:58:30.553423    8412 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 succeeded
	I0109 00:58:30.553513    8412 cache.go:87] Successfully saved all images to host disk.
	I0109 01:01:34.286235    8412 start.go:369] acquired machines lock for "stopped-upgrade-748100" in 3m3.9467279s
	I0109 01:01:34.286664    8412 start.go:96] Skipping create...Using existing machine configuration
	I0109 01:01:34.286759    8412 fix.go:54] fixHost starting: minikube
	I0109 01:01:34.287143    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:01:36.522759    8412 main.go:141] libmachine: [stdout =====>] : Off
	
	I0109 01:01:36.522886    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:01:36.522886    8412 fix.go:102] recreateIfNeeded on stopped-upgrade-748100: state=Stopped err=<nil>
	W0109 01:01:36.523043    8412 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 01:01:36.525533    8412 out.go:177] * Restarting existing hyperv VM for "stopped-upgrade-748100" ...
	I0109 01:01:36.530289    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM stopped-upgrade-748100
	I0109 01:01:39.811835    8412 main.go:141] libmachine: [stdout =====>] : 
	I0109 01:01:39.811921    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:01:39.812020    8412 main.go:141] libmachine: Waiting for host to start...
	I0109 01:01:39.812020    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:01:42.390936    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 01:01:42.390936    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:01:42.391330    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-748100 ).networkadapters[0]).ipaddresses[0]
	I0109 01:01:45.106325    8412 main.go:141] libmachine: [stdout =====>] : 
	I0109 01:01:45.106391    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:01:46.106970    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:01:48.407012    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 01:01:48.407012    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:01:48.407012    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-748100 ).networkadapters[0]).ipaddresses[0]
	I0109 01:01:51.240702    8412 main.go:141] libmachine: [stdout =====>] : 
	I0109 01:01:51.240949    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:01:52.250106    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:01:54.588511    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 01:01:54.588789    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:01:54.588789    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-748100 ).networkadapters[0]).ipaddresses[0]
	I0109 01:01:57.240215    8412 main.go:141] libmachine: [stdout =====>] : 
	I0109 01:01:57.240466    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:01:58.254869    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:02:00.544604    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 01:02:00.544604    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:02:00.544604    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-748100 ).networkadapters[0]).ipaddresses[0]
	I0109 01:02:03.221877    8412 main.go:141] libmachine: [stdout =====>] : 
	I0109 01:02:03.221955    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:02:04.234863    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:02:06.579100    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 01:02:06.579142    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:02:06.579142    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-748100 ).networkadapters[0]).ipaddresses[0]
	I0109 01:02:09.257275    8412 main.go:141] libmachine: [stdout =====>] : 
	I0109 01:02:09.257584    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:02:10.259123    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:02:12.554840    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 01:02:12.555176    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:02:12.555220    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-748100 ).networkadapters[0]).ipaddresses[0]
	I0109 01:02:15.218090    8412 main.go:141] libmachine: [stdout =====>] : 
	I0109 01:02:15.218209    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:02:16.230438    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:02:18.540914    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 01:02:18.541215    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:02:18.541288    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-748100 ).networkadapters[0]).ipaddresses[0]
	I0109 01:02:21.254793    8412 main.go:141] libmachine: [stdout =====>] : 
	I0109 01:02:21.254885    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:02:22.270466    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:02:24.606427    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 01:02:24.606657    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:02:24.606737    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-748100 ).networkadapters[0]).ipaddresses[0]
	I0109 01:02:27.522648    8412 main.go:141] libmachine: [stdout =====>] : 172.24.101.209
	
	I0109 01:02:27.522849    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:02:27.525831    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:02:29.962767    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 01:02:29.962865    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:02:29.962865    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-748100 ).networkadapters[0]).ipaddresses[0]
	I0109 01:02:32.791235    8412 main.go:141] libmachine: [stdout =====>] : 172.24.101.209
	
	I0109 01:02:32.791235    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:02:32.791628    8412 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\stopped-upgrade-748100\config.json ...
	I0109 01:02:32.795501    8412 machine.go:88] provisioning docker machine ...
	I0109 01:02:32.795592    8412 buildroot.go:166] provisioning hostname "stopped-upgrade-748100"
	I0109 01:02:32.795751    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:02:35.094054    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 01:02:35.094283    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:02:35.094283    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-748100 ).networkadapters[0]).ipaddresses[0]
	I0109 01:02:37.816140    8412 main.go:141] libmachine: [stdout =====>] : 172.24.101.209
	
	I0109 01:02:37.816140    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:02:37.821132    8412 main.go:141] libmachine: Using SSH client type: native
	I0109 01:02:37.822171    8412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.101.209 22 <nil> <nil>}
	I0109 01:02:37.822171    8412 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-748100 && echo "stopped-upgrade-748100" | sudo tee /etc/hostname
	I0109 01:02:37.989218    8412 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-748100
	
	I0109 01:02:37.989218    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:02:40.433180    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 01:02:40.433246    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:02:40.433246    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-748100 ).networkadapters[0]).ipaddresses[0]
	I0109 01:02:43.149861    8412 main.go:141] libmachine: [stdout =====>] : 172.24.101.209
	
	I0109 01:02:43.150094    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:02:43.156220    8412 main.go:141] libmachine: Using SSH client type: native
	I0109 01:02:43.156949    8412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.101.209 22 <nil> <nil>}
	I0109 01:02:43.157501    8412 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-748100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-748100/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-748100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 01:02:43.304725    8412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 01:02:43.304884    8412 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0109 01:02:43.304979    8412 buildroot.go:174] setting up certificates
	I0109 01:02:43.304979    8412 provision.go:83] configureAuth start
	I0109 01:02:43.305071    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:02:45.632353    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 01:02:45.632535    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:02:45.632535    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-748100 ).networkadapters[0]).ipaddresses[0]
	I0109 01:02:48.446545    8412 main.go:141] libmachine: [stdout =====>] : 172.24.101.209
	
	I0109 01:02:48.446963    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:02:48.447067    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:02:50.732320    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 01:02:50.732320    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:02:50.732456    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-748100 ).networkadapters[0]).ipaddresses[0]
	I0109 01:02:53.492049    8412 main.go:141] libmachine: [stdout =====>] : 172.24.101.209
	
	I0109 01:02:53.492265    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:02:53.492265    8412 provision.go:138] copyHostCerts
	I0109 01:02:53.492795    8412 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0109 01:02:53.492795    8412 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0109 01:02:53.493335    8412 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0109 01:02:53.494637    8412 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0109 01:02:53.494709    8412 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0109 01:02:53.495020    8412 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0109 01:02:53.495757    8412 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0109 01:02:53.495757    8412 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0109 01:02:53.496532    8412 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0109 01:02:53.497334    8412 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.stopped-upgrade-748100 san=[172.24.101.209 172.24.101.209 localhost 127.0.0.1 minikube stopped-upgrade-748100]
	I0109 01:02:53.822296    8412 provision.go:172] copyRemoteCerts
	I0109 01:02:53.837325    8412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 01:02:53.837325    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:02:56.168212    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 01:02:56.168384    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:02:56.168464    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-748100 ).networkadapters[0]).ipaddresses[0]
	I0109 01:02:58.899195    8412 main.go:141] libmachine: [stdout =====>] : 172.24.101.209
	
	I0109 01:02:58.899288    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:02:58.899557    8412 sshutil.go:53] new ssh client: &{IP:172.24.101.209 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\stopped-upgrade-748100\id_rsa Username:docker}
	I0109 01:02:59.005102    8412 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.167777s)
	I0109 01:02:59.005102    8412 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0109 01:02:59.024914    8412 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0109 01:02:59.043006    8412 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0109 01:02:59.060747    8412 provision.go:86] duration metric: configureAuth took 15.7557122s
	I0109 01:02:59.060747    8412 buildroot.go:189] setting minikube options for container-runtime
	I0109 01:02:59.062794    8412 config.go:182] Loaded profile config "stopped-upgrade-748100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0109 01:02:59.063326    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:03:01.396188    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 01:03:01.396188    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:03:01.396286    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-748100 ).networkadapters[0]).ipaddresses[0]
	I0109 01:03:04.115729    8412 main.go:141] libmachine: [stdout =====>] : 172.24.101.209
	
	I0109 01:03:04.115729    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:03:04.121553    8412 main.go:141] libmachine: Using SSH client type: native
	I0109 01:03:04.122314    8412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.101.209 22 <nil> <nil>}
	I0109 01:03:04.122314    8412 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0109 01:03:04.280651    8412 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0109 01:03:04.280651    8412 buildroot.go:70] root file system type: tmpfs
	I0109 01:03:04.280651    8412 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0109 01:03:04.281200    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:03:06.502855    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 01:03:06.502855    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:03:06.502855    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-748100 ).networkadapters[0]).ipaddresses[0]
	I0109 01:03:09.133380    8412 main.go:141] libmachine: [stdout =====>] : 172.24.101.209
	
	I0109 01:03:09.133380    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:03:09.142074    8412 main.go:141] libmachine: Using SSH client type: native
	I0109 01:03:09.142310    8412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.101.209 22 <nil> <nil>}
	I0109 01:03:09.142310    8412 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0109 01:03:09.305135    8412 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0109 01:03:09.305135    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:03:11.565747    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 01:03:11.566012    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:03:11.566012    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-748100 ).networkadapters[0]).ipaddresses[0]
	I0109 01:03:14.226027    8412 main.go:141] libmachine: [stdout =====>] : 172.24.101.209
	
	I0109 01:03:14.226144    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:03:14.231679    8412 main.go:141] libmachine: Using SSH client type: native
	I0109 01:03:14.232774    8412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.101.209 22 <nil> <nil>}
	I0109 01:03:14.232774    8412 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0109 01:03:17.213913    8412 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0109 01:03:17.213913    8412 machine.go:91] provisioned docker machine in 44.418316s
	I0109 01:03:17.213913    8412 start.go:300] post-start starting for "stopped-upgrade-748100" (driver="hyperv")
	I0109 01:03:17.213913    8412 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 01:03:17.229978    8412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 01:03:17.229978    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:03:19.615676    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 01:03:19.615676    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:03:19.615876    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-748100 ).networkadapters[0]).ipaddresses[0]
	I0109 01:03:22.332608    8412 main.go:141] libmachine: [stdout =====>] : 172.24.101.209
	
	I0109 01:03:22.332754    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:03:22.333003    8412 sshutil.go:53] new ssh client: &{IP:172.24.101.209 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\stopped-upgrade-748100\id_rsa Username:docker}
	I0109 01:03:22.452917    8412 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2229386s)
	I0109 01:03:22.497290    8412 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 01:03:22.505410    8412 info.go:137] Remote host: Buildroot 2019.02.7
	I0109 01:03:22.505410    8412 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0109 01:03:22.505973    8412 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0109 01:03:22.507269    8412 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0109 01:03:22.522458    8412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 01:03:22.527171    8412 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0109 01:03:22.552613    8412 start.go:303] post-start completed in 5.3386996s
	I0109 01:03:22.552613    8412 fix.go:56] fixHost completed within 1m48.2658428s
	I0109 01:03:22.552806    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:03:24.781251    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 01:03:24.781251    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:03:24.781251    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-748100 ).networkadapters[0]).ipaddresses[0]
	I0109 01:03:27.527189    8412 main.go:141] libmachine: [stdout =====>] : 172.24.101.209
	
	I0109 01:03:27.527296    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:03:27.532126    8412 main.go:141] libmachine: Using SSH client type: native
	I0109 01:03:27.532927    8412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.101.209 22 <nil> <nil>}
	I0109 01:03:27.532927    8412 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0109 01:03:27.678226    8412 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704762207.663236754
	
	I0109 01:03:27.678298    8412 fix.go:206] guest clock: 1704762207.663236754
	I0109 01:03:27.678298    8412 fix.go:219] Guest: 2024-01-09 01:03:27.663236754 +0000 UTC Remote: 2024-01-09 01:03:22.5526131 +0000 UTC m=+300.409915301 (delta=5.110623654s)
	I0109 01:03:27.678443    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:03:29.869330    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 01:03:29.869580    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:03:29.869664    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-748100 ).networkadapters[0]).ipaddresses[0]
	I0109 01:03:32.570551    8412 main.go:141] libmachine: [stdout =====>] : 172.24.101.209
	
	I0109 01:03:32.570551    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:03:32.577840    8412 main.go:141] libmachine: Using SSH client type: native
	I0109 01:03:32.578619    8412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa46120] 0xa48c60 <nil>  [] 0s} 172.24.101.209 22 <nil> <nil>}
	I0109 01:03:32.578619    8412 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1704762207
	I0109 01:03:32.737248    8412 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jan  9 01:03:27 UTC 2024
	
	I0109 01:03:32.737248    8412 fix.go:226] clock set: Tue Jan  9 01:03:27 UTC 2024
	 (err=<nil>)
	I0109 01:03:32.737248    8412 start.go:83] releasing machines lock for "stopped-upgrade-748100", held for 1m58.4509076s
	I0109 01:03:32.737248    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:03:35.155285    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 01:03:35.155475    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:03:35.155475    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-748100 ).networkadapters[0]).ipaddresses[0]
	I0109 01:03:38.114554    8412 main.go:141] libmachine: [stdout =====>] : 172.24.101.209
	
	I0109 01:03:38.114642    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:03:38.126184    8412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 01:03:38.126184    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:03:38.140023    8412 ssh_runner.go:195] Run: cat /version.json
	I0109 01:03:38.141042    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-748100 ).state
	I0109 01:03:40.722764    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 01:03:40.722963    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:03:40.722853    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 01:03:40.723086    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:03:40.723166    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-748100 ).networkadapters[0]).ipaddresses[0]
	I0109 01:03:40.723086    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-748100 ).networkadapters[0]).ipaddresses[0]
	I0109 01:03:43.926012    8412 main.go:141] libmachine: [stdout =====>] : 172.24.101.209
	
	I0109 01:03:43.926012    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:03:43.926012    8412 sshutil.go:53] new ssh client: &{IP:172.24.101.209 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\stopped-upgrade-748100\id_rsa Username:docker}
	I0109 01:03:43.973456    8412 main.go:141] libmachine: [stdout =====>] : 172.24.101.209
	
	I0109 01:03:43.973583    8412 main.go:141] libmachine: [stderr =====>] : 
	I0109 01:03:43.973923    8412 sshutil.go:53] new ssh client: &{IP:172.24.101.209 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\stopped-upgrade-748100\id_rsa Username:docker}
	I0109 01:03:44.044819    8412 ssh_runner.go:235] Completed: cat /version.json: (5.9037764s)
	W0109 01:03:44.044819    8412 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0109 01:03:44.059812    8412 ssh_runner.go:195] Run: systemctl --version
	I0109 01:03:44.175058    8412 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (6.048874s)
	I0109 01:03:44.192966    8412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0109 01:03:44.201566    8412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 01:03:44.219246    8412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0109 01:03:44.241242    8412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0109 01:03:44.249957    8412 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0109 01:03:44.250062    8412 start.go:475] detecting cgroup driver to use...
	I0109 01:03:44.250310    8412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 01:03:44.286330    8412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0109 01:03:44.308328    8412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0109 01:03:44.317327    8412 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0109 01:03:44.330331    8412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0109 01:03:44.353366    8412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0109 01:03:44.377324    8412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0109 01:03:44.406801    8412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0109 01:03:44.443849    8412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 01:03:44.477424    8412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0109 01:03:44.507906    8412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 01:03:44.529578    8412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 01:03:44.558986    8412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 01:03:44.721156    8412 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0109 01:03:44.748028    8412 start.go:475] detecting cgroup driver to use...
	I0109 01:03:44.766770    8412 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0109 01:03:44.805371    8412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 01:03:44.833378    8412 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 01:03:44.884877    8412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 01:03:44.918033    8412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0109 01:03:44.937927    8412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 01:03:44.978303    8412 ssh_runner.go:195] Run: which cri-dockerd
	I0109 01:03:45.005290    8412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0109 01:03:45.017945    8412 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0109 01:03:45.052268    8412 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0109 01:03:45.196425    8412 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0109 01:03:45.331713    8412 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0109 01:03:45.331713    8412 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0109 01:03:45.365574    8412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 01:03:45.497156    8412 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0109 01:03:46.606553    8412 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.1093962s)
	I0109 01:03:46.622962    8412 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0109 01:03:46.646609    8412 out.go:177] 
	W0109 01:03:46.649619    8412 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Logs begin at Tue 2024-01-09 01:02:19 UTC, end at Tue 2024-01-09 01:03:46 UTC. --
	Jan 09 01:03:14 stopped-upgrade-748100 systemd[1]: Starting Docker Application Container Engine...
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.660176919Z" level=info msg="Starting up"
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.662935731Z" level=info msg="libcontainerd: started new containerd process" pid=2477
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.663116225Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.663192722Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.663264620Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.663335318Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.701477599Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.701886586Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.702475568Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.702762358Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.702841656Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.705282978Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.705383375Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.706308545Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.707223916Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.707534306Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.707629503Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.707659602Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.707668902Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.707678101Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.711192589Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.711248487Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.711315785Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.711333085Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.711344584Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.711356884Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.711368383Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.711379683Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.711390583Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.711401782Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.711496579Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.711561777Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.712095060Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.712199857Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.712241256Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.712256155Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.712267555Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.712278154Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.712288054Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.712298654Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.712308653Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.712318953Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.712328953Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.712386751Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.712401050Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.712411650Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.712423350Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.712543446Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.712683841Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.712698841Z" level=info msg="containerd successfully booted in 0.014102s"
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.728670831Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.728769728Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.728850725Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.728873024Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.730995356Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.731189550Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.731213549Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.731225049Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.748149208Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.902984462Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.903050160Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.903066159Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.903073559Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.903080759Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.903090759Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Jan 09 01:03:14 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:14.903278953Z" level=info msg="Loading containers: start."
	Jan 09 01:03:15 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:15.344696233Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 09 01:03:16 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:16.283364024Z" level=info msg="Loading containers: done."
	Jan 09 01:03:16 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:16.639806816Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Jan 09 01:03:16 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:16.640727990Z" level=info msg="Daemon has completed initialization"
	Jan 09 01:03:17 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:17.196562105Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 09 01:03:17 stopped-upgrade-748100 systemd[1]: Started Docker Application Container Engine.
	Jan 09 01:03:17 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:17.197346085Z" level=info msg="API listen on [::]:2376"
	Jan 09 01:03:45 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:45.500238690Z" level=info msg="Processing signal 'terminated'"
	Jan 09 01:03:45 stopped-upgrade-748100 systemd[1]: Stopping Docker Application Container Engine...
	Jan 09 01:03:45 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:45.501525906Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 09 01:03:45 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:45.501965711Z" level=info msg="Daemon shutdown complete"
	Jan 09 01:03:45 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:45.502038612Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 09 01:03:45 stopped-upgrade-748100 dockerd[2469]: time="2024-01-09T01:03:45.502078313Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 09 01:03:46 stopped-upgrade-748100 systemd[1]: docker.service: Succeeded.
	Jan 09 01:03:46 stopped-upgrade-748100 systemd[1]: Stopped Docker Application Container Engine.
	Jan 09 01:03:46 stopped-upgrade-748100 systemd[1]: Starting Docker Application Container Engine...
	Jan 09 01:03:46 stopped-upgrade-748100 dockerd[2919]: time="2024-01-09T01:03:46.584939113Z" level=info msg="Starting up"
	Jan 09 01:03:46 stopped-upgrade-748100 dockerd[2919]: time="2024-01-09T01:03:46.588139352Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 09 01:03:46 stopped-upgrade-748100 dockerd[2919]: time="2024-01-09T01:03:46.588226253Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 09 01:03:46 stopped-upgrade-748100 dockerd[2919]: time="2024-01-09T01:03:46.588367455Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Jan 09 01:03:46 stopped-upgrade-748100 dockerd[2919]: time="2024-01-09T01:03:46.588503456Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 09 01:03:46 stopped-upgrade-748100 dockerd[2919]: time="2024-01-09T01:03:46.588893461Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Jan 09 01:03:46 stopped-upgrade-748100 dockerd[2919]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused": unavailable
	Jan 09 01:03:46 stopped-upgrade-748100 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 09 01:03:46 stopped-upgrade-748100 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 09 01:03:46 stopped-upgrade-748100 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0109 01:03:46.649619    8412 out.go:239] * 
	W0109 01:03:46.651627    8412 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0109 01:03:46.653623    8412 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-windows-amd64.exe start -p stopped-upgrade-748100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (630.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10800.625s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-511200 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-h2l6s" [8fb9ebb4-0fbe-4898-a056-5d45877f6988] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
panic: test timed out after 3h0m0s
running tests:
	TestNetworkPlugins (49m28s)
	TestNetworkPlugins/group (49m28s)
	TestNetworkPlugins/group/calico (8m35s)
	TestNetworkPlugins/group/custom-flannel (6m37s)
	TestNetworkPlugins/group/custom-flannel/NetCatPod (14s)
	TestNetworkPlugins/group/false (2m16s)
	TestNetworkPlugins/group/false/Start (2m16s)
	TestNetworkPlugins/group/kindnet (10m12s)
	TestStartStop (58m56s)
	TestStartStop/group (58m56s)

                                                
                                                
goroutine 3115 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2259 +0x3b9
created by time.goFunc
	/usr/local/go/src/time/sleep.go:176 +0x2d

                                                
                                                
goroutine 1 [chan receive, 34 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc0005a5860, 0xc0005f5b80)
	/usr/local/go/src/testing/testing.go:1601 +0x138
testing.runTests(0xc0007c19a0?, {0x4be7d80, 0x2a, 0x2a}, {0xc0005f5be8?, 0x9fbfe5?, 0x4c09a20?})
	/usr/local/go/src/testing/testing.go:2052 +0x445
testing.(*M).Run(0xc0007c19a0)
	/usr/local/go/src/testing/testing.go:1925 +0x636
k8s.io/minikube/test/integration.TestMain(0xc00009bef0?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x88
main.main()
	_testmain.go:131 +0x1c6

                                                
                                                
goroutine 10 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0006bc900)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 2017 [chan receive, 49 minutes]:
testing.(*testContext).waitParallel(0xc0000e87d0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0022d1860)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0022d1860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0022d1860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc0022d1860, 0xc000c33100)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2012
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2862 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002b18ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2861
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 32 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.110.1/klog.go:1157 +0x111
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 31
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.110.1/klog.go:1153 +0x171

                                                
                                                
goroutine 147 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0005f7d50, 0x3c)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x38e69e0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0005e5860)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0005f7d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005e7f90?, {0x38ead80, 0xc0020614a0}, 0x1, 0xc000106d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000c28480?, 0x3b9aca00, 0x0, 0xd0?, 0x98821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xacdf65?, 0xc0000f3a20?, 0xc000804e40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 172
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2280 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc002186590, 0x16)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x38e69e0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002852d20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0021865c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00212df88?, {0x38ead80, 0xc00205c000}, 0x1, 0xc000106d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0x98821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc00212dfd0?, 0xacdfc7?, 0xc000055140?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2290
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 876 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x390d520, 0xc000106d80}, 0xc0021a9f50, 0xc002c6c1d8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x390d520, 0xc000106d80}, 0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x390d520?, 0xc000106d80?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xacdf65?, 0xc0004ae2c0?, 0xc002f68f60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 839
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2035 [chan receive, 2 minutes]:
testing.(*T).Run(0xc0022d1ba0, {0x2a3f9dc?, 0x38e4300?}, 0xc002a47290)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0022d1ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5f0
testing.tRunner(0xc0022d1ba0, 0xc000c33200)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2012
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2369 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002853260)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2336
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 2014 [chan receive, 49 minutes]:
testing.(*testContext).waitParallel(0xc0000e87d0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0022d1380)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0022d1380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0022d1380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc0022d1380, 0xc000c32a00)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2012
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 148 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x390d520, 0xc000106d80}, 0xc00097bf50, 0x23898ac?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x390d520, 0xc000106d80}, 0x60?, 0xc000457e80?, 0xc0004da3f0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x390d520?, 0xc000106d80?}, 0xc000482820?, 0xa891c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xa8a085?, 0xc000482820?, 0xc0008bdd80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 172
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2282 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2281
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 149 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 148
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 2036 [chan receive]:
testing.(*T).Run(0xc0022d1d40, {0x2a48349?, 0x38e4300?}, 0xc002994150)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0022d1d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:148 +0x8c5
testing.tRunner(0xc0022d1d40, 0xc000c33280)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2012
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2192 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0009093e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2200
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 2372 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000aee7d0, 0x15)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x38e69e0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002853140)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000aee800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00211ff90?, {0x38ead80, 0xc00233a450}, 0x1, 0xc000106d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000ae8600?, 0x3b9aca00, 0x0, 0xd0?, 0x98821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xacdf65?, 0xc0000f3760?, 0xc002cfdda0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2386
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 171 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0005e5980)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 63
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 172 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0005f7d80, 0xc000106d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 63
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 3014 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc000ca2b00, 0xc002a08180)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3011
	/usr/local/go/src/os/exec/exec.go:743 +0xa34

goroutine 1386 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1385
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 2940 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00219b020)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2939
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 877 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 876
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 2016 [chan receive, 49 minutes]:
testing.(*testContext).waitParallel(0xc0000e87d0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0022d16c0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0022d16c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0022d16c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc0022d16c0, 0xc000c33080)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2012
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1384 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc000aee590, 0x31)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x38e69e0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002899560)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000aee5c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002ce3f88?, {0x38ead80, 0xc00207e090}, 0x1, 0xc000106d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000804cc0?, 0x3b9aca00, 0x0, 0xd0?, 0x98821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xacdf65?, 0xc0000f2c60?, 0xc000804de0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1443
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 875 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0029ce850, 0x36)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x38e69e0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000838b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0029ce880)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002e37f88?, {0x38ead80, 0xc0021161b0}, 0x1, 0xc000106d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0x98821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc002e37fd0?, 0xacdfc7?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 839
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 1443 [chan receive, 135 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000aee5c0, 0xc000106d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1409
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 2290 [chan receive, 32 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0021865c0, 0xc000106d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2276
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 1159 [chan send, 149 minutes]:
os/exec.(*Cmd).watchCtx(0xc000ca3e40, 0xc0028db380)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1158
	/usr/local/go/src/os/exec/exec.go:743 +0xa34

goroutine 2257 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002852e40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2276
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 839 [chan receive, 153 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0029ce880, 0xc000106d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 791
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 2215 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2214
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 3086 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000afae00, 0xc000106d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3027
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 2845 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x390d520, 0xc000106d80}, 0xc002141f50, 0xc002b9e418?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x390d520, 0xc000106d80}, 0x1?, 0x1?, 0xc002141fb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x390d520?, 0xc000106d80?}, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002141fd0?, 0xacdfc7?, 0xc000afa180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2863
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 2012 [chan receive, 2 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc0022d0680, 0xc0020fc0d8)
	/usr/local/go/src/testing/testing.go:1601 +0x138
created by testing.(*T).Run in goroutine 1848
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1442 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002899680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 1409
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 1848 [chan receive, 51 minutes]:
testing.(*T).Run(0xc00208dd40, {0x2a3f9d7?, 0x9b806d?}, 0xc0020fc0d8)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00208dd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc00208dd40, 0x3498560)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 710 [IO wait, 163 minutes]:
internal/poll.runtime_pollWait(0x2b5f6e4e130, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0x0?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc002184018, 0xc0005e7bb8)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc002184000, 0x300, {0xc0020fa000?, 0xc0000a1800?, 0x3499020?}, 0xc0005e7cc8?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc002184000, 0xc0005e7d90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc002184000)
	/usr/local/go/src/net/fd_windows.go:166 +0x54
net.(*TCPListener).accept(0xc0008fa340)
	/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e
net.(*TCPListener).Accept(0xc0008fa340)
	/usr/local/go/src/net/tcpsock.go:315 +0x30
net/http.(*Server).Serve(0xc00063a1e0, {0x3900fa0, 0xc0008fa340})
	/usr/local/go/src/net/http/server.go:3056 +0x364
net/http.(*Server).ListenAndServe(0xc00063a1e0)
	/usr/local/go/src/net/http/server.go:2985 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00208d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 707
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2211 +0x13a

goroutine 2015 [chan receive, 49 minutes]:
testing.(*testContext).waitParallel(0xc0000e87d0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0022d1520)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0022d1520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0022d1520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc0022d1520, 0xc000c33000)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2012
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 3012 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x4c38900?, {0xc0020cbc28?, 0xc53fab?, 0x3f54f40?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0020cbc80?, 0x95e656?, 0x4c64c40?, 0xc0020cbce8?, 0x9513bd?, 0x2b5d1860eb8?, 0xc000483a4d?, 0xc0020cbce0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc00236e21c?, 0x5e4, 0x9f7fbf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc00043e280?, {0xc00236e21c?, 0x0?, 0xc00236e000?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc00043e280, {0xc00236e21c, 0x5e4, 0x5e4})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00009e040, {0xc00236e21c?, 0xc0020cbe68?, 0xc0020cbe68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002a47350, {0x38e9a20, 0xc00009e040})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x38e9aa0, 0xc002a47350}, {0x38e9a20, 0xc00009e040}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3011
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

goroutine 1385 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x390d520, 0xc000106d80}, 0xc003053f50, 0xc002d38118?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x390d520, 0xc000106d80}, 0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x390d520?, 0xc000106d80?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xacdf65?, 0xc0007869a0?, 0xc0008048a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1443
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 1094 [chan send, 145 minutes]:
os/exec.(*Cmd).watchCtx(0xc00019d4a0, 0xc000ae8540)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 823
	/usr/local/go/src/os/exec/exec.go:743 +0xa34

goroutine 2034 [syscall, locked to thread]:
syscall.SyscallN(0x7ffb919c4de0?, {0xc00203f0e8?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x5?, 0x30?, 0x38de690?, 0xc000b1b0e0?, 0x100c00203f1e8?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0xc00009e158?, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1145 +0x5d
os.(*Process).wait(0xc0025a1b00)
	/usr/local/go/src/os/exec_windows.go:18 +0x55
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0023c3e40)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0x178?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
os/exec.(*Cmd).CombinedOutput(0xc0023c3e40)
	/usr/local/go/src/os/exec/exec.go:1005 +0x94
k8s.io/minikube/test/integration.debugLogs(0xc0022d1a00, {0xc000647fa0, 0xe})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:618 +0xa9e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0022d1a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:211 +0xc2c
testing.tRunner(0xc0022d1a00, 0xc000c33180)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2012
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 838 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000838c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 791
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 3085 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002ba9080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3027
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 1821 [chan receive, 59 minutes]:
testing.(*T).Run(0xc0020e8820, {0x2a3f9d7?, 0x8085bd541a0?}, 0x3498780)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestStartStop(0xc0020e8680?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0020e8820, 0x34985a8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2844 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0027789d0, 0x1)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x38e69e0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002b18a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002778a00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002077f90?, {0x38ead80, 0xc00060c120}, 0x1, 0xc000106d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0x98821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc002077fd0?, 0xacdfc7?, 0xc0026dec80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2863
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 3011 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x7ffb919c4de0?, {0xc00242dba8?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0xa?, 0xc00242dcc0?, 0xc00242dbb0?, 0xc00242dce0?, 0x100c00242dca8?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0xc00009e020?, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1145 +0x5d
os.(*Process).wait(0xc002a72ae0)
	/usr/local/go/src/os/exec_windows.go:18 +0x55
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000ca2b00)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc002468680?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc002468680, 0xc000ca2b00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc002468680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc002468680, 0xc002a47290)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2035
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2724 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2723
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 1976 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc0000e87d0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1571 +0x53c
testing.tRunner(0xc0005a44e0, 0x3498780)
	/usr/local/go/src/testing/testing.go:1601 +0x138
created by testing.(*T).Run in goroutine 1821
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2863 [chan receive, 6 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002778a00, 0xc000106d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2861
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 2037 [syscall, locked to thread]:
syscall.SyscallN(0x7ffb919c4de0?, {0xc0025e10e8?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x5?, 0x30?, 0x38de690?, 0xc0025e1168?, 0x100c0025e11e8?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0xc00009e1a0?, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1145 +0x5d
os.(*Process).wait(0xc0027b6600)
	/usr/local/go/src/os/exec_windows.go:18 +0x55
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000ca2c60)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0x331?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
os/exec.(*Cmd).CombinedOutput(0xc000ca2c60)
	/usr/local/go/src/os/exec/exec.go:1005 +0x94
k8s.io/minikube/test/integration.debugLogs(0xc000104000, {0xc000457cc0, 0xd})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:422 +0x41e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000104000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:211 +0xc2c
testing.tRunner(0xc000104000, 0xc000c33300)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2012
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2373 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x390d520, 0xc000106d80}, 0xc002dbdf50, 0xc002ba9678?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x390d520, 0xc000106d80}, 0x1?, 0x1?, 0xc002dbdfb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x390d520?, 0xc000106d80?}, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002dbdfd0?, 0xacdfc7?, 0xc000ae8720?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2386
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 2193 [chan receive, 34 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0029ceb40, 0xc000106d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2200
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 2386 [chan receive, 28 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000aee800, 0xc000106d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2336
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 2213 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0029ceb10, 0x16)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x38e69e0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000909260)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0029ceb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002583f90?, {0x38ead80, 0xc002084c00}, 0x1, 0xc000106d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002cfcea0?, 0x3b9aca00, 0x0, 0xd0?, 0x98821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xacdf65?, 0xc0004ae6e0?, 0xc002cfca20?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2193
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2214 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x390d520, 0xc000106d80}, 0xc002051f50, 0xc002899df8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x390d520, 0xc000106d80}, 0x1?, 0x1?, 0xc002051fb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x390d520?, 0xc000106d80?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002051fd0?, 0xacdfc7?, 0xc002cfc000?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2193
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 3013 [syscall, locked to thread]:
syscall.SyscallN(0x4c3b480?, {0xc00212dc28?, 0x0?, 0x3f54f40?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x4bb8620?, 0x95e656?, 0x4c64c40?, 0xc00212dce8?, 0x9513bd?, 0x2b5d1860108?, 0x87?, 0xc000000000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc0024678ff?, 0x701, 0x9f7fbf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc00043e780?, {0xc0024678ff?, 0x1?, 0xc002460000?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc00043e780, {0xc0024678ff, 0x701, 0x701})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00009e0a0, {0xc0024678ff?, 0xc0001181b9?, 0xc00212de68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002a47380, {0x38e9a20, 0xc00009e0a0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x38e9aa0, 0xc002a47380}, {0x38e9a20, 0xc00009e0a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc002735c20?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3011
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

goroutine 2710 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002b9ed80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2706
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 2281 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x390d520, 0xc000106d80}, 0xc002303f50, 0xc002b9ed78?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x390d520, 0xc000106d80}, 0x1?, 0x1?, 0xc002303fb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x390d520?, 0xc000106d80?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002303fd0?, 0xacdfc7?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2290
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 2374 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2373
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 2722 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc000aeee10, 0x1)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x38e69e0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002b9ec60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000aeee40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002097f90?, {0x38ead80, 0xc002772c90}, 0x1, 0xc000106d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0x98821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc002097fd0?, 0xacdfc7?, 0xc000c28360?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2711
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2974 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0029cf190, 0x0)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x38e69e0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00219af00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0029cf1c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002167f20?, {0x38ead80, 0xc00205ce10}, 0x1, 0xc000106d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002b9ff20?, 0x3b9aca00, 0x0, 0xd0?, 0x98821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc002167fd0?, 0xecca45?, 0xc000876900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2941
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2723 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x390d520, 0xc000106d80}, 0xc002263f50, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x390d520, 0xc000106d80}, 0x50?, 0xec1fe5?, 0xc002263ec0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x390d520?, 0xc000106d80?}, 0xc000ab1f90?, 0xc000c286c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002263fd0?, 0xebad45?, 0xc000816900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2711
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 2711 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000aeee40, 0xc000106d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2706
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 3094 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000afadd0, 0x0)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x38e69e0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002ba8f60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000afae00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0025ddf88?, {0x38ead80, 0xc002e22150}, 0x1, 0xc000106d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0x98821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc0025ddfd0?, 0xacdfc7?, 0xc002521980?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3086
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2941 [chan receive, 4 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0029cf1c0, 0xc000106d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2939
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 2846 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2845
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 3095 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x390d520, 0xc000106d80}, 0xc00242bf50, 0xc000c1ebf8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x390d520, 0xc000106d80}, 0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x390d520?, 0xc000106d80?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xacdf65?, 0xc0023c3080?, 0xc002cfd3e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3086
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 3027 [runnable]:
sync.runtime_notifyListWait(0xc0027eec48, 0x0)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x2b5d1860108?)
	/usr/local/go/src/sync/cond.go:70 +0x85
golang.org/x/net/http2.(*pipe).Read(0xc0027eec30, {0xc000b35600, 0x200, 0x200})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.19.0/http2/pipe.go:76 +0xdf
golang.org/x/net/http2.transportResponseBody.Read({0x94e05a?}, {0xc000b35600?, 0xa?, 0x2a437a1?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.19.0/http2/transport.go:2558 +0x65
io.ReadAll({0x2b5f6d8af08, 0xc0027eec00})
	/usr/local/go/src/io/io.go:704 +0x7e
k8s.io/client-go/rest.(*Request).transformResponse(0xc002082d80, 0xc002e01cb0, 0xc0027ecc00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/rest/request.go:1098 +0x98
k8s.io/client-go/rest.(*Request).Do.func1(0xc0005f7c40?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/rest/request.go:1064 +0x31
k8s.io/client-go/rest.(*Request).request.func3.1(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/rest/request.go:1039
k8s.io/client-go/rest.(*Request).request.func3(0xc002e01cb0, 0xc0020a4d60, {0x390db10?, 0xc0005f7c40?}, 0x0?, 0x0?, 0x951265?, {0x0?, 0x0?}, 0x349a658)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/rest/request.go:1046 +0xd7
k8s.io/client-go/rest.(*Request).request(0xc002082d80, {0x390d360, 0xc00040e1c0}, 0x2?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/rest/request.go:1048 +0x4ed
k8s.io/client-go/rest.(*Request).Do(0xc002082d80, {0x390d360, 0xc00040e1c0})
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/rest/request.go:1063 +0xb0
k8s.io/client-go/kubernetes/typed/core/v1.(*pods).List(0xc0028469e0, {0x390d360, 0xc00040e1c0}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x2a4a285, 0xa}, {0x0, ...}, ...})
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/kubernetes/typed/core/v1/pod.go:99 +0x165
k8s.io/minikube/test/integration.PodWait.func1({0x390d360, 0xc00040e1c0})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:327 +0x10b
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func2(0xc0020a59a8?, {0x390d360?, 0xc00040e1c0?})
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/loop.go:87 +0x52
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x390d360, 0xc00040e1c0}, {0x39015d0?, 0xc0003df9a0}, 0x1, 0x0, 0xc0022e2600?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/loop.go:88 +0x247
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x390d360?, 0xc0004da540?}, 0x3b9aca00, 0xc0020a5bf0?, 0x0?, 0xc002867e18?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:48 +0x98
k8s.io/minikube/test/integration.PodWait({0x390d360, 0xc0004da540}, 0xc0022a0000, {0xc002c24600, 0x15}, {0x2a437a1, 0x7}, {0x2a4a285, 0xa}, 0xd18c2e2800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.4(0xc0022a0000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:163 +0x3c5
testing.tRunner(0xc0022a0000, 0xc002994150)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2036
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2976 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2975
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 2975 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x390d520, 0xc000106d80}, 0xc00252bf50, 0xc000909d38?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x390d520, 0xc000106d80}, 0x1?, 0x1?, 0xc00252bfb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x390d520?, 0xc000106d80?}, 0xc0003f9720?, 0xc0003f9720?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002403140?, 0xacdfc7?, 0xc00096ff80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2941
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 3108 [runnable, locked to thread]:
syscall.SyscallN(0x1?, {0xc002097458?, 0x45?, 0xc002097490?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall9(0x208?, 0xc002097510?, 0x988e51?, 0x2b5f6e5b6a0?, 0x0?, 0xc002097508?, 0x980f6e?, 0x4c38300?, 0x965de5?, 0x0, ...)
	/usr/local/go/src/runtime/syscall_windows.go:494 +0x72
syscall.WSARecv(0x97c270bcc4f09170?, 0x2d19368716631d12?, 0x1, 0x9f8147?, 0x97c270bcc4f09170?, 0x9da48416631d12?, 0x0?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1295 +0xb2
internal/poll.(*FD).Read.func1(0x29da48416631d12?)
	/usr/local/go/src/internal/poll/fd_windows.go:437 +0x38
internal/poll.execIO(0xc0025afb98, 0x3499048)
	/usr/local/go/src/internal/poll/fd_windows.go:159 +0x6c
internal/poll.(*FD).Read(0xc0025afb80, {0xc000956000, 0x2000, 0x2000})
	/usr/local/go/src/internal/poll/fd_windows.go:436 +0x2b1
net.(*netFD).Read(0xc0025afb80, {0xc000956000?, 0x1ffb?, 0xc000496280?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc00009e020, {0xc000956000?, 0xc000956000?, 0x5?})
	/usr/local/go/src/net/net.go:179 +0x45
crypto/tls.(*atLeastReader).Read(0xc0020fdbd8, {0xc000956000?, 0xc0020fdbd8?, 0x0?})
	/usr/local/go/src/crypto/tls/conn.go:805 +0x3b
bytes.(*Buffer).ReadFrom(0xc000164628, {0x38eb500, 0xc0020fdbd8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc000164380, {0x2b5f6d8adc0?, 0xc002a6a000}, 0x2000?)
	/usr/local/go/src/crypto/tls/conn.go:827 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc000164380, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:625 +0x250
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:587
crypto/tls.(*Conn).Read(0xc000164380, {0xc002268000, 0x1000, 0xeb72e9?})
	/usr/local/go/src/crypto/tls/conn.go:1369 +0x158
bufio.(*Reader).Read(0xc00228c4e0, {0xc00214e200, 0x9, 0x2691580?})
	/usr/local/go/src/bufio/bufio.go:244 +0x197
io.ReadAtLeast({0x38e9b40, 0xc00228c4e0}, {0xc00214e200, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc00214e200, 0x9, 0x500000?}, {0x38e9b40?, 0xc00228c4e0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.19.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc00214e1c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.19.0/http2/frame.go:498 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc002097f98)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.19.0/http2/transport.go:2275 +0x11f
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000876900)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.19.0/http2/transport.go:2170 +0x65
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 3107
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.19.0/http2/transport.go:821 +0xcbe

goroutine 3101 [select]:
golang.org/x/net/http2.(*clientStream).writeRequest(0xc0027eec00, 0xc0027ecd00)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.19.0/http2/transport.go:1464 +0xac7
golang.org/x/net/http2.(*clientStream).doRequest(0xc000816b98?, 0xc000967fb8?)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.19.0/http2/transport.go:1326 +0x18
created by golang.org/x/net/http2.(*ClientConn).RoundTrip in goroutine 3027
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.19.0/http2/transport.go:1232 +0x308

goroutine 3112 [syscall, locked to thread]:
syscall.SyscallN(0x4c39c80?, {0xc0022a7c28?, 0x1?, 0x3f54f40?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x6d?, 0x95e656?, 0xc0022a1380?, 0xc0022a7ce8?, 0x951265?, 0x9885dc?, 0xc0022a1380?, 0xc0022a7ce0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc00251393a?, 0x2c6, 0x400?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc000751400?, {0xc00251393a?, 0x0?, 0xc002513800?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc000751400, {0xc00251393a, 0x2c6, 0x2c6})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00009e170, {0xc00251393a?, 0xc0022a7e68?, 0xc0022a7e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00218d830, {0x38e9a20, 0xc00009e170})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x38e9aa0, 0xc00218d830}, {0x38e9a20, 0xc00009e170}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc002095e00?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2034
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

goroutine 3113 [syscall, locked to thread]:
syscall.SyscallN(0x4c38900?, {0xc000965c28?, 0x982e10?, 0x3f40108?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc000965c80?, 0x10000000095e656?, 0xc0022a1520?, 0xc000965ce8?, 0x951265?, 0xc00006ea00?, 0xc0022a1520?, 0xc000965ce0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc002513d3a?, 0x2c6, 0x400?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc000751b80?, {0xc002513d3a?, 0x0?, 0xc002513c00?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc000751b80, {0xc002513d3a, 0x2c6, 0x2c6})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00009e1a8, {0xc002513d3a?, 0xc000965e68?, 0xc000965e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00218d890, {0x38e9a20, 0xc00009e1a8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x38e9aa0, 0xc00218d890}, {0x38e9a20, 0xc00009e1a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc000816a80?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2037
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

goroutine 3096 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3095
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

Test pass (164/208)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 16.21
4 TestDownloadOnly/v1.16.0/preload-exists 0.09
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.3
10 TestDownloadOnly/v1.28.4/json-events 13.42
11 TestDownloadOnly/v1.28.4/preload-exists 0
14 TestDownloadOnly/v1.28.4/kubectl 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.31
17 TestDownloadOnly/v1.29.0-rc.2/json-events 12.1
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.32
23 TestDownloadOnly/DeleteAll 1.84
24 TestDownloadOnly/DeleteAlwaysSucceeds 1.45
26 TestBinaryMirror 7.49
27 TestOffline 258.05
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.35
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.36
32 TestAddons/Setup 394.44
35 TestAddons/parallel/Ingress 77.1
36 TestAddons/parallel/InspektorGadget 28.2
37 TestAddons/parallel/MetricsServer 23.15
38 TestAddons/parallel/HelmTiller 32.33
40 TestAddons/parallel/CSI 103.31
41 TestAddons/parallel/Headlamp 33.31
42 TestAddons/parallel/CloudSpanner 20.21
43 TestAddons/parallel/LocalPath 89.14
44 TestAddons/parallel/NvidiaDevicePlugin 21.48
45 TestAddons/parallel/Yakd 6.02
48 TestAddons/serial/GCPAuth/Namespaces 0.37
49 TestAddons/StoppedEnableDisable 48.59
50 TestCertOptions 580.6
51 TestCertExpiration 916.83
52 TestDockerFlags 437.34
53 TestForceSystemdFlag 396.19
54 TestForceSystemdEnv 357.26
61 TestErrorSpam/start 18.1
62 TestErrorSpam/status 38.11
63 TestErrorSpam/pause 23.52
64 TestErrorSpam/unpause 23.61
65 TestErrorSpam/stop 53.03
68 TestFunctional/serial/CopySyncFile 0.04
69 TestFunctional/serial/StartWithProxy 211.19
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 110.64
72 TestFunctional/serial/KubeContext 0.15
73 TestFunctional/serial/KubectlGetPods 0.25
76 TestFunctional/serial/CacheCmd/cache/add_remote 27.85
77 TestFunctional/serial/CacheCmd/cache/add_local 10.85
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.29
79 TestFunctional/serial/CacheCmd/cache/list 0.3
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.72
81 TestFunctional/serial/CacheCmd/cache/cache_reload 37.83
82 TestFunctional/serial/CacheCmd/cache/delete 0.59
83 TestFunctional/serial/MinikubeKubectlCmd 0.53
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.52
85 TestFunctional/serial/ExtraConfig 121.72
86 TestFunctional/serial/ComponentHealth 0.2
87 TestFunctional/serial/LogsCmd 8.75
88 TestFunctional/serial/LogsFileCmd 11.1
89 TestFunctional/serial/InvalidService 21.85
95 TestFunctional/parallel/StatusCmd 44.93
99 TestFunctional/parallel/ServiceCmdConnect 27.38
100 TestFunctional/parallel/AddonsCmd 0.79
101 TestFunctional/parallel/PersistentVolumeClaim 41.93
103 TestFunctional/parallel/SSHCmd 21.56
104 TestFunctional/parallel/CpCmd 62.93
105 TestFunctional/parallel/MySQL 64.42
106 TestFunctional/parallel/FileSync 9.95
107 TestFunctional/parallel/CertSync 67.58
111 TestFunctional/parallel/NodeLabels 0.19
113 TestFunctional/parallel/NonActiveRuntimeDisabled 11.42
115 TestFunctional/parallel/License 3.71
116 TestFunctional/parallel/ServiceCmd/DeployApp 20.5
117 TestFunctional/parallel/Version/short 0.44
118 TestFunctional/parallel/Version/components 8.38
119 TestFunctional/parallel/ImageCommands/ImageListShort 7.54
120 TestFunctional/parallel/ImageCommands/ImageListTable 7.75
121 TestFunctional/parallel/ImageCommands/ImageListJson 7.88
122 TestFunctional/parallel/ImageCommands/ImageListYaml 7.95
123 TestFunctional/parallel/ImageCommands/ImageBuild 28.5
124 TestFunctional/parallel/ImageCommands/Setup 4.42
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 24.66
126 TestFunctional/parallel/ServiceCmd/List 14.48
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 21.85
128 TestFunctional/parallel/ServiceCmd/JSONOutput 14.3
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 29.29
131 TestFunctional/parallel/DockerEnv/powershell 50.42
134 TestFunctional/parallel/UpdateContextCmd/no_changes 3.21
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.57
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.54
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 11.26
138 TestFunctional/parallel/ImageCommands/ImageRemove 18.41
139 TestFunctional/parallel/ProfileCmd/profile_not_create 9.7
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 21.67
141 TestFunctional/parallel/ProfileCmd/profile_list 10.21
142 TestFunctional/parallel/ProfileCmd/profile_json_output 9.44
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 10.79
145 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 8.83
146 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
148 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 17.61
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
155 TestFunctional/delete_addon-resizer_images 0.52
156 TestFunctional/delete_my-image_image 0.2
157 TestFunctional/delete_minikube_cached_images 0.2
161 TestImageBuild/serial/Setup 196.51
162 TestImageBuild/serial/NormalBuild 9.79
163 TestImageBuild/serial/BuildWithBuildArg 9.51
164 TestImageBuild/serial/BuildWithDockerIgnore 7.91
165 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.72
168 TestIngressAddonLegacy/StartLegacyK8sCluster 241.03
170 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 40.83
171 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 14.78
172 TestIngressAddonLegacy/serial/ValidateIngressAddons 96.18
175 TestJSONOutput/start/Command 206.45
176 TestJSONOutput/start/Audit 0
178 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Command 8.1
182 TestJSONOutput/pause/Audit 0
184 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Command 8.03
188 TestJSONOutput/unpause/Audit 0
190 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/stop/Command 29.54
194 TestJSONOutput/stop/Audit 0
196 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
198 TestErrorJSONOutput 1.66
203 TestMainNoArgs 0.25
204 TestMinikubeProfile 496.03
207 TestMountStart/serial/StartWithMountFirst 150.4
208 TestMountStart/serial/VerifyMountFirst 9.67
209 TestMountStart/serial/StartWithMountSecond 151.14
210 TestMountStart/serial/VerifyMountSecond 9.72
211 TestMountStart/serial/DeleteFirst 26.84
212 TestMountStart/serial/VerifyMountPostDelete 9.67
213 TestMountStart/serial/Stop 22.05
214 TestMountStart/serial/RestartStopped 113.76
215 TestMountStart/serial/VerifyMountPostStop 9.75
218 TestMultiNode/serial/FreshStart2Nodes 424.36
219 TestMultiNode/serial/DeployApp2Nodes 10.03
221 TestMultiNode/serial/AddNode 224.28
222 TestMultiNode/serial/MultiNodeLabels 0.2
223 TestMultiNode/serial/ProfileList 7.8
224 TestMultiNode/serial/CopyFile 367.88
225 TestMultiNode/serial/StopNode 67.57
226 TestMultiNode/serial/StartAfterStop 174.8
231 TestPreload 481.82
232 TestScheduledStopWindows 328.64
239 TestKubernetesUpgrade 1093.52
242 TestNoKubernetes/serial/StartNoK8sWithVersion 0.39
244 TestStoppedBinaryUpgrade/Setup 0.79
254 TestPause/serial/Start 301.26
255 TestPause/serial/SecondStartNoReconfiguration 385.3
267 TestStoppedBinaryUpgrade/MinikubeLogs 11.01
268 TestPause/serial/Pause 9.31
269 TestPause/serial/VerifyStatus 13.48
270 TestPause/serial/Unpause 8.17
271 TestPause/serial/PauseAgain 8.66
272 TestPause/serial/DeletePaused 45.45
273 TestPause/serial/VerifyDeletedResources 5.6
TestDownloadOnly/v1.16.0/json-events (16.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-486300 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-486300 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv: (16.2062349s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (16.21s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.09s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-486300
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-486300: exit status 85 (300.5968ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-486300 | minikube1\jenkins | v1.32.0 | 08 Jan 24 22:53 UTC |          |
	|         | -p download-only-486300        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:53:00
	Running on machine: minikube1
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:53:00.773363   15084 out.go:296] Setting OutFile to fd 636 ...
	I0108 22:53:00.773906   15084 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:53:00.774103   15084 out.go:309] Setting ErrFile to fd 640...
	I0108 22:53:00.774103   15084 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 22:53:00.788188   15084 root.go:314] Error reading config file at C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0108 22:53:00.800505   15084 out.go:303] Setting JSON to true
	I0108 22:53:00.805108   15084 start.go:128] hostinfo: {"hostname":"minikube1","uptime":1875,"bootTime":1704752505,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0108 22:53:00.805278   15084 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 22:53:00.822493   15084 out.go:97] [download-only-486300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I0108 22:53:00.830365   15084 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0108 22:53:00.823224   15084 notify.go:220] Checking for updates...
	W0108 22:53:00.823224   15084 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0108 22:53:00.835597   15084 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0108 22:53:00.838098   15084 out.go:169] MINIKUBE_LOCATION=17830
	I0108 22:53:00.840987   15084 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0108 22:53:00.846742   15084 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 22:53:00.847683   15084 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:53:06.643824   15084 out.go:97] Using the hyperv driver based on user configuration
	I0108 22:53:06.643947   15084 start.go:298] selected driver: hyperv
	I0108 22:53:06.644057   15084 start.go:902] validating driver "hyperv" against <nil>
	I0108 22:53:06.644389   15084 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 22:53:06.698528   15084 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0108 22:53:06.699805   15084 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 22:53:06.699805   15084 cni.go:84] Creating CNI manager for ""
	I0108 22:53:06.700179   15084 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0108 22:53:06.700179   15084 start_flags.go:323] config:
	{Name:download-only-486300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-486300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:53:06.701470   15084 iso.go:125] acquiring lock: {Name:mk6e44fd4f974e035b521383471f58bfbae3f4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:53:06.704279   15084 out.go:97] Downloading VM boot image ...
	I0108 22:53:06.705329   15084 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.32.1-1702708929-17806-amd64.iso
	I0108 22:53:10.617413   15084 out.go:97] Starting control plane node download-only-486300 in cluster download-only-486300
	I0108 22:53:10.617413   15084 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 22:53:10.662503   15084 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0108 22:53:10.662646   15084 cache.go:56] Caching tarball of preloaded images
	I0108 22:53:10.663164   15084 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 22:53:10.833521   15084 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0108 22:53:10.833521   15084 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0108 22:53:10.896683   15084 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0108 22:53:14.698274   15084 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0108 22:53:14.725197   15084 preload.go:256] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0108 22:53:15.701561   15084 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0108 22:53:15.702680   15084 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-486300\config.json ...
	I0108 22:53:15.702680   15084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-486300\config.json: {Name:mkaf830d9a36cdafb803f441b21170075161a6f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:53:15.703758   15084 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 22:53:15.705477   15084 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/windows/amd64/kubectl.exe.sha1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\windows\amd64\v1.16.0/kubectl.exe
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-486300"

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 22:53:16.997078    8132 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.30s)
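
The Last Start log above downloads the preload tarball with an md5 checksum appended to the URL and then verifies the file before using it. A rough sketch of that download-then-verify step, assuming a plain HTTP GET and an MD5 comparison (illustrative only, not minikube's actual downloader; the URL, destination, and checksum are placeholders):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// downloadAndVerify fetches url into dest and compares the file's MD5 sum
	// against wantMD5 (hex-encoded), mirroring the "?checksum=md5:..." download
	// and "verifying checksum" steps in the log above.
	func downloadAndVerify(url, dest, wantMD5 string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		f, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		// Write to the file and the hash in one pass.
		if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}

	func main() {
		// Placeholder values for illustration.
		if err := downloadAndVerify("https://example.com/preload.tar.lz4", "preload.tar.lz4", "326f3ce331abb64565b50b8c9e791244"); err != nil {
			fmt.Println(err)
		}
	}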

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (13.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-486300 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-486300 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv: (13.421925s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (13.42s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-486300
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-486300: exit status 85 (310.2555ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-486300 | minikube1\jenkins | v1.32.0 | 08 Jan 24 22:53 UTC |          |
	|         | -p download-only-486300        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	| start   | -o=json --download-only        | download-only-486300 | minikube1\jenkins | v1.32.0 | 08 Jan 24 22:53 UTC |          |
	|         | -p download-only-486300        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:53:17
	Running on machine: minikube1
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:53:17.376137   12572 out.go:296] Setting OutFile to fd 636 ...
	I0108 22:53:17.377070   12572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:53:17.377070   12572 out.go:309] Setting ErrFile to fd 640...
	I0108 22:53:17.377070   12572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 22:53:17.397424   12572 root.go:314] Error reading config file at C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0108 22:53:17.406290   12572 out.go:303] Setting JSON to true
	I0108 22:53:17.411200   12572 start.go:128] hostinfo: {"hostname":"minikube1","uptime":1892,"bootTime":1704752505,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0108 22:53:17.411200   12572 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 22:53:17.466566   12572 out.go:97] [download-only-486300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I0108 22:53:17.467624   12572 notify.go:220] Checking for updates...
	I0108 22:53:17.632753   12572 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0108 22:53:17.685008   12572 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0108 22:53:17.689117   12572 out.go:169] MINIKUBE_LOCATION=17830
	I0108 22:53:17.692211   12572 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0108 22:53:17.697550   12572 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 22:53:17.698241   12572 config.go:182] Loaded profile config "download-only-486300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0108 22:53:17.698955   12572 start.go:810] api.Load failed for download-only-486300: filestore "download-only-486300": Docker machine "download-only-486300" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 22:53:17.698955   12572 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 22:53:17.699664   12572 start.go:810] api.Load failed for download-only-486300: filestore "download-only-486300": Docker machine "download-only-486300" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 22:53:23.369774   12572 out.go:97] Using the hyperv driver based on existing profile
	I0108 22:53:23.370302   12572 start.go:298] selected driver: hyperv
	I0108 22:53:23.370302   12572 start.go:902] validating driver "hyperv" against &{Name:download-only-486300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-486300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:53:23.424288   12572 cni.go:84] Creating CNI manager for ""
	I0108 22:53:23.424288   12572 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 22:53:23.424288   12572 start_flags.go:323] config:
	{Name:download-only-486300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-486300 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:53:23.424893   12572 iso.go:125] acquiring lock: {Name:mk6e44fd4f974e035b521383471f58bfbae3f4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:53:23.428377   12572 out.go:97] Starting control plane node download-only-486300 in cluster download-only-486300
	I0108 22:53:23.428377   12572 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 22:53:23.468889   12572 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0108 22:53:23.469843   12572 cache.go:56] Caching tarball of preloaded images
	I0108 22:53:23.470199   12572 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 22:53:23.473250   12572 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0108 22:53:23.473250   12572 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0108 22:53:23.540525   12572 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-486300"

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 22:53:30.716310    9788 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.31s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (12.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-486300 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-486300 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv: (12.098478s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (12.10s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-486300
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-486300: exit status 85 (315.5943ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-486300 | minikube1\jenkins | v1.32.0 | 08 Jan 24 22:53 UTC |          |
	|         | -p download-only-486300           |                      |                   |         |                     |          |
	|         | --force --alsologtostderr         |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |                   |         |                     |          |
	|         | --container-runtime=docker        |                      |                   |         |                     |          |
	|         | --driver=hyperv                   |                      |                   |         |                     |          |
	| start   | -o=json --download-only           | download-only-486300 | minikube1\jenkins | v1.32.0 | 08 Jan 24 22:53 UTC |          |
	|         | -p download-only-486300           |                      |                   |         |                     |          |
	|         | --force --alsologtostderr         |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |                   |         |                     |          |
	|         | --container-runtime=docker        |                      |                   |         |                     |          |
	|         | --driver=hyperv                   |                      |                   |         |                     |          |
	| start   | -o=json --download-only           | download-only-486300 | minikube1\jenkins | v1.32.0 | 08 Jan 24 22:53 UTC |          |
	|         | -p download-only-486300           |                      |                   |         |                     |          |
	|         | --force --alsologtostderr         |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |                   |         |                     |          |
	|         | --container-runtime=docker        |                      |                   |         |                     |          |
	|         | --driver=hyperv                   |                      |                   |         |                     |          |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:53:31
	Running on machine: minikube1
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:53:31.122604   15328 out.go:296] Setting OutFile to fd 796 ...
	I0108 22:53:31.123714   15328 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:53:31.123714   15328 out.go:309] Setting ErrFile to fd 800...
	I0108 22:53:31.123714   15328 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 22:53:31.139741   15328 root.go:314] Error reading config file at C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0108 22:53:31.147412   15328 out.go:303] Setting JSON to true
	I0108 22:53:31.151251   15328 start.go:128] hostinfo: {"hostname":"minikube1","uptime":1905,"bootTime":1704752505,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0108 22:53:31.151251   15328 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 22:53:31.331967   15328 out.go:97] [download-only-486300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I0108 22:53:31.335278   15328 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0108 22:53:31.332227   15328 notify.go:220] Checking for updates...
	I0108 22:53:31.340569   15328 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0108 22:53:31.342717   15328 out.go:169] MINIKUBE_LOCATION=17830
	I0108 22:53:31.345863   15328 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0108 22:53:31.351500   15328 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 22:53:31.352496   15328 config.go:182] Loaded profile config "download-only-486300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W0108 22:53:31.352496   15328 start.go:810] api.Load failed for download-only-486300: filestore "download-only-486300": Docker machine "download-only-486300" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 22:53:31.353134   15328 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 22:53:31.353459   15328 start.go:810] api.Load failed for download-only-486300: filestore "download-only-486300": Docker machine "download-only-486300" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 22:53:36.959182   15328 out.go:97] Using the hyperv driver based on existing profile
	I0108 22:53:36.960078   15328 start.go:298] selected driver: hyperv
	I0108 22:53:36.960078   15328 start.go:902] validating driver "hyperv" against &{Name:download-only-486300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-486300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:53:37.011123   15328 cni.go:84] Creating CNI manager for ""
	I0108 22:53:37.011123   15328 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 22:53:37.011123   15328 start_flags.go:323] config:
	{Name:download-only-486300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-486300 Namespa
ce:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:53:37.011710   15328 iso.go:125] acquiring lock: {Name:mk6e44fd4f974e035b521383471f58bfbae3f4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:53:37.016447   15328 out.go:97] Starting control plane node download-only-486300 in cluster download-only-486300
	I0108 22:53:37.016611   15328 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0108 22:53:37.056033   15328 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0108 22:53:37.056124   15328 cache.go:56] Caching tarball of preloaded images
	I0108 22:53:37.056203   15328 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0108 22:53:37.059717   15328 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0108 22:53:37.059717   15328 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0108 22:53:37.125116   15328 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:74b99cd9fa76659778caad266ad399ba -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0108 22:53:40.697154   15328 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0108 22:53:40.697956   15328 preload.go:256] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-486300"

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 22:53:43.147883    5724 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.32s)

                                                
                                    
TestDownloadOnly/DeleteAll (1.84s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.844852s)
--- PASS: TestDownloadOnly/DeleteAll (1.84s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (1.45s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-486300
aaa_download_only_test.go:202: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-486300: (1.4463218s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (1.45s)

                                                
                                    
TestBinaryMirror (7.49s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-325800 --alsologtostderr --binary-mirror http://127.0.0.1:61448 --driver=hyperv
aaa_download_only_test.go:307: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-325800 --alsologtostderr --binary-mirror http://127.0.0.1:61448 --driver=hyperv: (6.498015s)
helpers_test.go:175: Cleaning up "binary-mirror-325800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-325800
--- PASS: TestBinaryMirror (7.49s)
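
TestBinaryMirror points --binary-mirror at a local HTTP endpoint (http://127.0.0.1:61448 above) so Kubernetes binaries are fetched from it instead of the default release servers. A minimal sketch of such a mirror, assuming it is simply a static directory of pre-downloaded binaries served over HTTP (the directory name is a placeholder, and this is not the server the test itself runs):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a local directory of release binaries over HTTP; a URL such as
		// http://127.0.0.1:61448, as passed to --binary-mirror above, could
		// point at a server like this.
		fs := http.FileServer(http.Dir("./mirror")) // placeholder directory
		log.Fatal(http.ListenAndServe("127.0.0.1:61448", fs))
	}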

                                                
                                    
TestOffline (258.05s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-224200 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-224200 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (3m37.5025295s)
helpers_test.go:175: Cleaning up "offline-docker-224200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-224200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-224200: (40.5432681s)
--- PASS: TestOffline (258.05s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.35s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-852800
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-852800: exit status 85 (354.7835ms)

                                                
                                                
-- stdout --
	* Profile "addons-852800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-852800"

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 22:53:55.647681    8744 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.35s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.36s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-852800
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-852800: exit status 85 (354.9568ms)

                                                
                                                
-- stdout --
	* Profile "addons-852800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-852800"

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 22:53:55.649675    9788 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.36s)

                                                
                                    
TestAddons/Setup (394.44s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-852800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-852800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m34.4348414s)
--- PASS: TestAddons/Setup (394.44s)

                                                
                                    
TestAddons/parallel/Ingress (77.1s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-852800 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-852800 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-852800 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1084c06d-0a67-499d-8858-bd9b5ae66a0e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1084c06d-0a67-499d-8858-bd9b5ae66a0e] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 18.0170018s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.6478264s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-852800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0108 23:01:18.458839   14024 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context addons-852800 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 ip: (2.8347401s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.24.111.87
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 addons disable ingress-dns --alsologtostderr -v=1: (17.8889844s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 addons disable ingress --alsologtostderr -v=1: (24.516083s)
--- PASS: TestAddons/parallel/Ingress (77.10s)
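
The core assertion in this block is the curl issued through "minikube ssh" with an explicit Host header, which exercises the ingress-nginx controller from inside the VM. Below is a rough Go sketch of that step using the exact command from the log; the expectation that the response body mentions nginx is an assumption made for the example, not something taken from the log.

    package sketch

    import (
        "os/exec"
        "strings"
        "testing"
    )

    func TestIngressRespondsForHost(t *testing.T) {
        // Same command as the log: curl the ingress from inside the VM with a Host header.
        out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", "addons-852800",
            "ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'").CombinedOutput()
        if err != nil {
            t.Fatalf("curl through minikube ssh failed: %v\n%s", err, out)
        }
        // Assumption: the backing pod serves a default nginx page.
        if !strings.Contains(strings.ToLower(string(out)), "nginx") {
            t.Fatalf("expected an nginx response body, got:\n%s", out)
        }
    }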

                                                
                                    
TestAddons/parallel/InspektorGadget (28.2s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-7dq8j" [7648df13-f449-4f78-89fd-f5d91405436e] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0131611s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-852800
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-852800: (22.1834014s)
--- PASS: TestAddons/parallel/InspektorGadget (28.20s)

                                                
                                    
TestAddons/parallel/MetricsServer (23.15s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 26.9967ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-knxpc" [fffa3d47-562c-422b-a8a1-57bf12dca99c] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0134033s
addons_test.go:415: (dbg) Run:  kubectl --context addons-852800 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 addons disable metrics-server --alsologtostderr -v=1: (17.9134158s)
--- PASS: TestAddons/parallel/MetricsServer (23.15s)

                                                
                                    
TestAddons/parallel/HelmTiller (32.33s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 7.8309ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-zj9g8" [c6b2b083-2aed-4744-ba3e-316f467280b9] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0081354s
addons_test.go:473: (dbg) Run:  kubectl --context addons-852800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-852800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (11.0703443s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 addons disable helm-tiller --alsologtostderr -v=1: (16.2241683s)
--- PASS: TestAddons/parallel/HelmTiller (32.33s)

                                                
                                    
TestAddons/parallel/CSI (103.31s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 32.006ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-852800 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-852800 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d2c57ef4-3f04-4f04-9fa9-bc0a5c9dadc3] Pending
helpers_test.go:344: "task-pv-pod" [d2c57ef4-3f04-4f04-9fa9-bc0a5c9dadc3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d2c57ef4-3f04-4f04-9fa9-bc0a5c9dadc3] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 25.0068912s
addons_test.go:584: (dbg) Run:  kubectl --context addons-852800 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-852800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-852800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-852800 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-852800 delete pod task-pv-pod: (1.4096218s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-852800 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-852800 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-852800 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [11d82d90-19cc-45bf-822c-4284032324f2] Pending
helpers_test.go:344: "task-pv-pod-restore" [11d82d90-19cc-45bf-822c-4284032324f2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [11d82d90-19cc-45bf-822c-4284032324f2] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0206219s
addons_test.go:626: (dbg) Run:  kubectl --context addons-852800 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-852800 delete pod task-pv-pod-restore: (1.4749139s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-852800 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-852800 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 addons disable csi-hostpath-driver --alsologtostderr -v=1: (23.3494659s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 addons disable volumesnapshots --alsologtostderr -v=1: (17.2264805s)
--- PASS: TestAddons/parallel/CSI (103.31s)
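
The long run of helpers_test.go:394 lines above is a poll loop: the same `kubectl get pvc ... -o jsonpath={.status.phase}` command is re-run until the claim reports Bound. A hedged Go sketch of such a poll helper follows; the function name, the 2-second interval, and the Bound target are illustrative choices, while the kubectl invocation is copied from the log.

    package sketch

    import (
        "os/exec"
        "strings"
        "testing"
        "time"
    )

    // waitForPVCBound re-runs the same kubectl query seen in the log until the
    // claim reports Bound or the timeout expires.
    func waitForPVCBound(t *testing.T, kubectlContext, name, namespace string, timeout time.Duration) {
        t.Helper()
        deadline := time.Now().Add(timeout)
        for {
            out, err := exec.Command("kubectl", "--context", kubectlContext,
                "get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", namespace).CombinedOutput()
            phase := strings.TrimSpace(string(out))
            if err == nil && phase == "Bound" {
                return
            }
            if time.Now().After(deadline) {
                t.Fatalf("pvc %q did not reach Bound within %v (last phase %q, err %v)", name, timeout, phase, err)
            }
            time.Sleep(2 * time.Second) // polling interval is an arbitrary choice for the sketch
        }
    }

A caller would invoke something like waitForPVCBound(t, "addons-852800", "hpvc", "default", 6*time.Minute), matching the 6m0s wait shown in the log.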

                                                
                                    
TestAddons/parallel/Headlamp (33.31s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-852800 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-852800 --alsologtostderr -v=1: (16.3006173s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-swvnp" [5135bbc2-c382-4a18-b02f-95bcc0aec8f7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-swvnp" [5135bbc2-c382-4a18-b02f-95bcc0aec8f7] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.0112091s
--- PASS: TestAddons/parallel/Headlamp (33.31s)

                                                
                                    
TestAddons/parallel/CloudSpanner (20.21s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-4vgxj" [bce4c9c7-1bdb-44fc-9a5a-f1362a5ddf4f] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0213986s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-852800
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-852800: (15.1696058s)
--- PASS: TestAddons/parallel/CloudSpanner (20.21s)

                                                
                                    
TestAddons/parallel/LocalPath (89.14s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-852800 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-852800 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-852800 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [e10204cc-85d2-4fe6-9654-bd0527bb972e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [e10204cc-85d2-4fe6-9654-bd0527bb972e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [e10204cc-85d2-4fe6-9654-bd0527bb972e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.0106019s
addons_test.go:891: (dbg) Run:  kubectl --context addons-852800 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 ssh "cat /opt/local-path-provisioner/pvc-026a5e48-d0e7-4e79-b0a6-d014883a4060_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 ssh "cat /opt/local-path-provisioner/pvc-026a5e48-d0e7-4e79-b0a6-d014883a4060_default_test-pvc/file1": (10.31796s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-852800 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-852800 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-852800 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-852800 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m1.9392892s)
--- PASS: TestAddons/parallel/LocalPath (89.14s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (21.48s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-f9bgn" [590f0566-350f-45d1-8341-03fe799d42fa] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0155774s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-852800
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-852800: (16.4643704s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (21.48s)

                                                
                                    
TestAddons/parallel/Yakd (6.02s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-9dwhs" [5b588045-5cd1-4948-abb3-7d099fb792a0] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0150445s
--- PASS: TestAddons/parallel/Yakd (6.02s)
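
Blocks like this one reduce to "wait until every pod matching a label selector is up". The sketch below shows one simplified way to poll pod phases by selector with kubectl and jsonpath; unlike the real helper it only checks the Running phase, not container readiness, and all names are illustrative.

    package sketch

    import (
        "os/exec"
        "strings"
        "testing"
        "time"
    )

    // waitForPodsRunning polls pod phases for a label selector until every
    // matching pod reports Running. It does not check container readiness.
    func waitForPodsRunning(t *testing.T, kubectlContext, namespace, selector string, timeout time.Duration) {
        t.Helper()
        deadline := time.Now().Add(timeout)
        for {
            out, err := exec.Command("kubectl", "--context", kubectlContext, "get", "pods",
                "-n", namespace, "-l", selector,
                "-o", "jsonpath={.items[*].status.phase}").CombinedOutput()
            phases := strings.Fields(string(out))
            allRunning := err == nil && len(phases) > 0
            for _, p := range phases {
                if p != "Running" {
                    allRunning = false
                }
            }
            if allRunning {
                return
            }
            if time.Now().After(deadline) {
                t.Fatalf("pods %q in %q not Running within %v (phases %v, err %v)", selector, namespace, timeout, phases, err)
            }
            time.Sleep(2 * time.Second)
        }
    }

For this block the call would look like waitForPodsRunning(t, "addons-852800", "yakd-dashboard", "app.kubernetes.io/name=yakd-dashboard", 2*time.Minute), mirroring the 2m0s wait above.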

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.37s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-852800 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-852800 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.37s)

                                                
                                    
TestAddons/StoppedEnableDisable (48.59s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-852800
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-852800: (35.6802848s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-852800
addons_test.go:176: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-852800: (5.1553227s)
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-852800
addons_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-852800: (5.1447203s)
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-852800
addons_test.go:185: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-852800: (2.6082932s)
--- PASS: TestAddons/StoppedEnableDisable (48.59s)

                                                
                                    
TestCertOptions (580.6s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-789700 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
E0109 01:08:27.431527   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-789700 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (8m37.0358283s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-789700 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-789700 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (10.4012816s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-789700 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-789700 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-789700 -- "sudo cat /etc/kubernetes/admin.conf": (10.0403425s)
helpers_test.go:175: Cleaning up "cert-options-789700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-789700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-789700: (42.9669075s)
--- PASS: TestCertOptions (580.60s)
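
TestCertOptions starts a cluster with extra --apiserver-ips and --apiserver-names and then reads the apiserver certificate back with openssl over ssh. A small Go sketch of the SAN check follows; it assumes the extra IP and DNS name appear verbatim in the `openssl x509 -text` output, which is an assumption about the certificate text layout rather than something shown in the log.

    package sketch

    import (
        "os/exec"
        "strings"
        "testing"
    )

    func TestAPIServerCertSANs(t *testing.T) {
        // Same openssl invocation as the log, run inside the VM over ssh.
        out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", "cert-options-789700",
            "ssh", "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
        if err != nil {
            t.Fatalf("reading apiserver.crt over ssh failed: %v\n%s", err, out)
        }
        // Assumption: the extra SANs passed at start time show up verbatim in the text dump.
        for _, want := range []string{"192.168.15.15", "www.google.com"} {
            if !strings.Contains(string(out), want) {
                t.Errorf("apiserver certificate text does not mention %q", want)
            }
        }
    }

The --apiserver-port=8555 setting is not part of the certificate itself, which is presumably why the log follows up by reading /etc/kubernetes/admin.conf.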

                                                
                                    
TestCertExpiration (916.83s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-532100 --memory=2048 --cert-expiration=3m --driver=hyperv
E0109 01:05:30.319299   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-532100 --memory=2048 --cert-expiration=3m --driver=hyperv: (6m41.8658317s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-532100 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-532100 --memory=2048 --cert-expiration=8760h --driver=hyperv: (4m56.0599252s)
helpers_test.go:175: Cleaning up "cert-expiration-532100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-532100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-532100: (38.8949097s)
--- PASS: TestCertExpiration (916.83s)

                                                
                                    
TestDockerFlags (437.34s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-721400 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-721400 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (6m14.0098329s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-721400 ssh "sudo systemctl show docker --property=Environment --no-pager"
E0109 01:13:27.423500   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-721400 ssh "sudo systemctl show docker --property=Environment --no-pager": (10.4788398s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-721400 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-721400 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (10.2903709s)
helpers_test.go:175: Cleaning up "docker-flags-721400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-721400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-721400: (42.5593081s)
--- PASS: TestDockerFlags (437.34s)
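
The docker-flags run passes --docker-env and --docker-opt values at start time and then reads them back from the systemd unit with `systemctl show docker --property=Environment`. A hedged sketch of that verification is below; the test name is invented and only the FOO=BAR and BAZ=BAT values from the log are checked.

    package sketch

    import (
        "os/exec"
        "strings"
        "testing"
    )

    func TestDockerEnvFromFlags(t *testing.T) {
        out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", "docker-flags-721400",
            "ssh", "sudo systemctl show docker --property=Environment --no-pager").CombinedOutput()
        if err != nil {
            t.Fatalf("querying the docker systemd unit failed: %v\n%s", err, out)
        }
        // FOO=BAR and BAZ=BAT were passed as --docker-env at start time.
        for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
            if !strings.Contains(string(out), want) {
                t.Errorf("docker Environment is missing %q:\n%s", want, out)
            }
        }
    }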

                                                
                                    
TestForceSystemdFlag (396.19s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-926300 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-926300 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (5m39.8321931s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-926300 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-926300 ssh "docker info --format {{.CgroupDriver}}": (10.2607683s)
helpers_test.go:175: Cleaning up "force-systemd-flag-926300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-926300
E0109 01:03:27.417992   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-926300: (46.0968826s)
--- PASS: TestForceSystemdFlag (396.19s)
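
With --force-systemd the point of the check is that `docker info --format {{.CgroupDriver}}` inside the VM reports systemd. A short Go sketch of that assertion follows; it reads stdout only (via Output) so that unrelated warnings on stderr, like the Docker CLI context message seen elsewhere in this report, do not affect the comparison. Names are illustrative.

    package sketch

    import (
        "os/exec"
        "strings"
        "testing"
    )

    func TestCgroupDriverIsSystemd(t *testing.T) {
        // Output() keeps only stdout, so stderr warnings cannot break the comparison.
        out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", "force-systemd-flag-926300",
            "ssh", "docker info --format {{.CgroupDriver}}").Output()
        if err != nil {
            t.Fatalf("docker info failed: %v", err)
        }
        if driver := strings.TrimSpace(string(out)); driver != "systemd" {
            t.Fatalf("expected cgroup driver systemd, got %q", driver)
        }
    }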

                                                
                                    
TestForceSystemdEnv (357.26s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-561000 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-561000 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (5m3.5437591s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-561000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-561000 ssh "docker info --format {{.CgroupDriver}}": (10.3196532s)
helpers_test.go:175: Cleaning up "force-systemd-env-561000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-561000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-561000: (43.3952787s)
--- PASS: TestForceSystemdEnv (357.26s)

                                                
                                    
TestErrorSpam/start (18.1s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 start --dry-run: (5.9717169s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 start --dry-run: (6.0326624s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 start --dry-run: (6.0816906s)
--- PASS: TestErrorSpam/start (18.10s)
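
The error-spam tests run the same command several times and fail if unexpected warning or error lines show up in the output. The sketch below is a deliberately simplified version of that idea (substring matching against an allow list); it is not the logic in error_spam_test.go, and the helper name and matching rules are invented.

    package sketch

    import (
        "os/exec"
        "strings"
        "testing"
    )

    // checkNoUnexpectedSpam is a simplified stand-in for the error-spam idea:
    // run a command and complain about error/warning lines not on the allow list.
    func checkNoUnexpectedSpam(t *testing.T, allowed []string, args ...string) {
        t.Helper()
        out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
        if err != nil {
            t.Fatalf("%v failed: %v\n%s", args, err, out)
        }
        for _, line := range strings.Split(string(out), "\n") {
            lower := strings.ToLower(line)
            if !strings.Contains(lower, "error") && !strings.Contains(lower, "warning") {
                continue
            }
            allowedLine := false
            for _, a := range allowed {
                if strings.Contains(line, a) {
                    allowedLine = true
                    break
                }
            }
            if !allowedLine {
                t.Errorf("unexpected error/warning line: %q", line)
            }
        }
    }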

                                                
                                    
TestErrorSpam/status (38.11s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 status: (13.1574273s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 status: (12.4838852s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 status: (12.469898s)
--- PASS: TestErrorSpam/status (38.11s)

                                                
                                    
TestErrorSpam/pause (23.52s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 pause: (8.067415s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 pause: (7.7187256s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 pause: (7.7303815s)
--- PASS: TestErrorSpam/pause (23.52s)

                                                
                                    
TestErrorSpam/unpause (23.61s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 unpause: (7.9382707s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 unpause: (7.8024873s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 unpause: (7.8631406s)
--- PASS: TestErrorSpam/unpause (23.61s)

                                                
                                    
TestErrorSpam/stop (53.03s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 stop
E0108 23:10:30.315550   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 stop: (34.7002937s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 stop: (9.4128741s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-827500 stop: (8.9122444s)
--- PASS: TestErrorSpam/stop (53.03s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\14288\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

                                                
                                    
TestFunctional/serial/StartWithProxy (211.19s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-838800 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
functional_test.go:2233: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-838800 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m31.1796277s)
--- PASS: TestFunctional/serial/StartWithProxy (211.19s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (110.64s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-838800 --alsologtostderr -v=8
E0108 23:15:30.316061   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-838800 --alsologtostderr -v=8: (1m50.6394982s)
functional_test.go:659: soft start took 1m50.6414277s for "functional-838800" cluster.
--- PASS: TestFunctional/serial/SoftStart (110.64s)

                                                
                                    
TestFunctional/serial/KubeContext (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.15s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-838800 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (27.85s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 cache add registry.k8s.io/pause:3.1: (9.3562508s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 cache add registry.k8s.io/pause:3.3: (9.1390746s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 cache add registry.k8s.io/pause:latest: (9.354121s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (27.85s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (10.85s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-838800 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3236569296\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-838800 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3236569296\001: (1.9879259s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 cache add minikube-local-cache-test:functional-838800
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 cache add minikube-local-cache-test:functional-838800: (8.340017s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 cache delete minikube-local-cache-test:functional-838800
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-838800
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (10.85s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.72s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 ssh sudo crictl images: (9.7238774s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.72s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (37.83s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.74935s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-838800 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.7704898s)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 23:17:28.884584   10328 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 cache reload: (8.6325677s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.6757706s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (37.83s)
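
cache_reload walks a fixed sequence: delete the image inside the node, confirm `crictl inspecti` now fails, run `cache reload`, and confirm the image is back. The Go sketch below strings those four commands together exactly as they appear in the log; the surrounding test scaffolding is illustrative.

    package sketch

    import (
        "os/exec"
        "testing"
    )

    func TestCacheReloadSequence(t *testing.T) {
        bin := "out/minikube-windows-amd64.exe"
        run := func(args ...string) ([]byte, error) {
            return exec.Command(bin, args...).CombinedOutput()
        }
        if out, err := run("-p", "functional-838800", "ssh", "sudo docker rmi registry.k8s.io/pause:latest"); err != nil {
            t.Fatalf("removing the image inside the node failed: %v\n%s", err, out)
        }
        if _, err := run("-p", "functional-838800", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
            t.Fatal("expected crictl inspecti to fail while the image is absent")
        }
        if out, err := run("-p", "functional-838800", "cache", "reload"); err != nil {
            t.Fatalf("cache reload failed: %v\n%s", err, out)
        }
        if out, err := run("-p", "functional-838800", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
            t.Fatalf("image still missing after cache reload: %v\n%s", err, out)
        }
    }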

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.59s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.59s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.53s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 kubectl -- --context functional-838800 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.53s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out\kubectl.exe --context functional-838800 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.52s)

                                                
                                    
TestFunctional/serial/ExtraConfig (121.72s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-838800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-838800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m1.7225135s)
functional_test.go:757: restart took 2m1.7227976s for "functional-838800" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (121.72s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.2s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-838800 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.20s)
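
ComponentHealth inspects the JSON from `kubectl get po -l tier=control-plane` and requires each control-plane pod to be Running and Ready, which is what the phase/status pairs above reflect. A sketch of that JSON decoding in Go follows; the struct models only the fields used here and the test name is invented.

    package sketch

    import (
        "encoding/json"
        "os/exec"
        "testing"
    )

    func TestControlPlanePodsHealthy(t *testing.T) {
        out, err := exec.Command("kubectl", "--context", "functional-838800",
            "get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
        if err != nil {
            t.Fatalf("kubectl get po failed: %v", err)
        }
        // Only the fields inspected below are modelled.
        var list struct {
            Items []struct {
                Metadata struct {
                    Name string `json:"name"`
                } `json:"metadata"`
                Status struct {
                    Phase      string `json:"phase"`
                    Conditions []struct {
                        Type   string `json:"type"`
                        Status string `json:"status"`
                    } `json:"conditions"`
                } `json:"status"`
            } `json:"items"`
        }
        if err := json.Unmarshal(out, &list); err != nil {
            t.Fatalf("decoding pod list: %v", err)
        }
        for _, pod := range list.Items {
            if pod.Status.Phase != "Running" {
                t.Errorf("%s phase: %s", pod.Metadata.Name, pod.Status.Phase)
            }
            ready := false
            for _, c := range pod.Status.Conditions {
                if c.Type == "Ready" && c.Status == "True" {
                    ready = true
                }
            }
            if !ready {
                t.Errorf("%s is not Ready", pod.Metadata.Name)
            }
        }
    }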

                                                
                                    
TestFunctional/serial/LogsCmd (8.75s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 logs: (8.7532114s)
--- PASS: TestFunctional/serial/LogsCmd (8.75s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (11.1s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd760812719\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd760812719\001\logs.txt: (11.0910662s)
--- PASS: TestFunctional/serial/LogsFileCmd (11.10s)

                                                
                                    
TestFunctional/serial/InvalidService (21.85s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-838800 apply -f testdata\invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-838800
E0108 23:20:30.321119   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-838800: exit status 115 (17.0461341s)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://172.24.109.223:30946 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 23:20:24.837886   10784 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube_service_f513297bf07cd3fd942cead3a34f1b094af52476_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-838800 delete -f testdata\invalidsvc.yaml
functional_test.go:2326: (dbg) Done: kubectl --context functional-838800 delete -f testdata\invalidsvc.yaml: (1.3487219s)
--- PASS: TestFunctional/serial/InvalidService (21.85s)

                                                
                                    
TestFunctional/parallel/StatusCmd (44.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 status: (14.736899s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (15.2851561s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 status -o json: (14.9117109s)
--- PASS: TestFunctional/parallel/StatusCmd (44.93s)
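Note on the -f flag used above: `minikube status -f` renders a Go text/template against the status fields, so the literal "kublet" label in the format string is just text, while {{.Kubelet}} is the field that matters. A minimal illustrative sketch of that rendering (the struct name and sample values below are assumptions, not minikube's own types):

// sketch only: renders the same format string the test passes to `status -f`
package main

import (
	"os"
	"text/template"
)

type clusterStatus struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	// Sample values; the real ones come from minikube's status probes.
	_ = tmpl.Execute(os.Stdout, clusterStatus{
		Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured",
	})
}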

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (27.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-838800 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-838800 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-kjbln" [0c43e487-2d96-48b7-a222-c752d429b6ed] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-kjbln" [0c43e487-2d96-48b7-a222-c752d429b6ed] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.0112024s
functional_test.go:1648: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 service hello-node-connect --url
functional_test.go:1648: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 service hello-node-connect --url: (17.9424748s)
functional_test.go:1654: found endpoint for hello-node-connect: http://172.24.109.223:32394
functional_test.go:1674: http://172.24.109.223:32394: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-kjbln

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.24.109.223:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=172.24.109.223:32394
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (27.38s)
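The "success! body" above is the echoserver response fetched from the URL that `minikube service ... --url` printed. A minimal sketch of that final check, assuming a plain HTTP GET against the reported endpoint (the hard-coded URL and the printing are illustrative, not the test's code):

// sketch only: fetch the NodePort endpoint and confirm the echoserver answers
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	const url = "http://172.24.109.223:32394" // endpoint reported in the log above
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// echoserver includes the pod hostname in its response body
	fmt.Println("contains Hostname:", strings.Contains(string(body), "Hostname:"))
}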

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.79s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (41.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9eb499d0-b76a-427c-8a3e-5577dad6986f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0199097s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-838800 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-838800 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-838800 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-838800 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2c9e9b93-d86e-41d0-9209-761b45148add] Pending
helpers_test.go:344: "sp-pod" [2c9e9b93-d86e-41d0-9209-761b45148add] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2c9e9b93-d86e-41d0-9209-761b45148add] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.0136904s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-838800 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-838800 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-838800 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ba5dfe2f-6dd3-41c6-aa99-cb30535537e8] Pending
helpers_test.go:344: "sp-pod" [ba5dfe2f-6dd3-41c6-aa99-cb30535537e8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ba5dfe2f-6dd3-41c6-aa99-cb30535537e8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0251771s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-838800 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (41.93s)
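The sequence above is the usual persistence check: write through the claim, delete and recreate the pod, then confirm the file is still there. A rough sketch of that flow as plain kubectl invocations driven from Go (the helper name is made up, and the readiness waits between steps are omitted for brevity):

// sketch only: mirrors the kubectl sequence shown in the log
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("kubectl %v\n%s", args, out)
	return err
}

func main() {
	ctx := "--context=functional-838800"
	_ = run(ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
	_ = run(ctx, "exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	_ = run(ctx, "delete", "-f", "testdata/storage-provisioner/pod.yaml")
	_ = run(ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// If the PVC is really persistent, the file written before the delete is still visible.
	_ = run(ctx, "exec", "sp-pod", "--", "ls", "/tmp/mount")
}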

                                                
                                    
TestFunctional/parallel/SSHCmd (21.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 ssh "echo hello"
functional_test.go:1724: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 ssh "echo hello": (11.0446048s)
functional_test.go:1741: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 ssh "cat /etc/hostname"
functional_test.go:1741: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 ssh "cat /etc/hostname": (10.5165562s)
--- PASS: TestFunctional/parallel/SSHCmd (21.56s)

                                                
                                    
TestFunctional/parallel/CpCmd (62.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 cp testdata\cp-test.txt /home/docker/cp-test.txt: (9.145081s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 ssh -n functional-838800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 ssh -n functional-838800 "sudo cat /home/docker/cp-test.txt": (10.8430487s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 cp functional-838800:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd1252801389\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 cp functional-838800:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd1252801389\001\cp-test.txt: (11.1570412s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 ssh -n functional-838800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 ssh -n functional-838800 "sudo cat /home/docker/cp-test.txt": (11.4877168s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.5825762s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 ssh -n functional-838800 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 ssh -n functional-838800 "sudo cat /tmp/does/not/exist/cp-test.txt": (11.7050921s)
--- PASS: TestFunctional/parallel/CpCmd (62.93s)

                                                
                                    
TestFunctional/parallel/MySQL (64.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-838800 replace --force -f testdata\mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-g2qlx" [d65c531b-cd26-4aab-8f8b-abb334fd68fc] Pending
helpers_test.go:344: "mysql-859648c796-g2qlx" [d65c531b-cd26-4aab-8f8b-abb334fd68fc] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-g2qlx" [d65c531b-cd26-4aab-8f8b-abb334fd68fc] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 49.0152682s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-838800 exec mysql-859648c796-g2qlx -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-838800 exec mysql-859648c796-g2qlx -- mysql -ppassword -e "show databases;": exit status 1 (328.1281ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-838800 exec mysql-859648c796-g2qlx -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-838800 exec mysql-859648c796-g2qlx -- mysql -ppassword -e "show databases;": exit status 1 (317.3554ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-838800 exec mysql-859648c796-g2qlx -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-838800 exec mysql-859648c796-g2qlx -- mysql -ppassword -e "show databases;": exit status 1 (379.8481ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-838800 exec mysql-859648c796-g2qlx -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-838800 exec mysql-859648c796-g2qlx -- mysql -ppassword -e "show databases;": exit status 1 (408.8323ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-838800 exec mysql-859648c796-g2qlx -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-838800 exec mysql-859648c796-g2qlx -- mysql -ppassword -e "show databases;": exit status 1 (324.013ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-838800 exec mysql-859648c796-g2qlx -- mysql -ppassword -e "show databases;"
E0108 23:25:30.317356   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/MySQL (64.42s)
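The repeated non-zero exits above are expected: the pod reports Running before mysqld has finished initializing, so the query is simply retried until the transient 2002/1045 errors clear. A minimal sketch of such a retry loop (the attempt count and sleep interval are illustrative, not the test's actual backoff):

// sketch only: re-run the same kubectl exec until mysqld accepts the query
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-838800", "exec", "mysql-859648c796-g2qlx", "--",
		"mysql", "-ppassword", "-e", "show databases;"}
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("query succeeded on attempt %d:\n%s", attempt, out)
			return
		}
		fmt.Printf("attempt %d failed (%v), retrying...\n", attempt, err)
		time.Sleep(5 * time.Second) // give mysqld time to finish initializing
	}
	fmt.Println("mysql never became ready")
}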

                                                
                                    
TestFunctional/parallel/FileSync (9.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/14288/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 ssh "sudo cat /etc/test/nested/copy/14288/hosts"
functional_test.go:1930: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 ssh "sudo cat /etc/test/nested/copy/14288/hosts": (9.9469548s)
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (9.95s)

                                                
                                    
TestFunctional/parallel/CertSync (67.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/14288.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 ssh "sudo cat /etc/ssl/certs/14288.pem"
functional_test.go:1972: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 ssh "sudo cat /etc/ssl/certs/14288.pem": (10.6702202s)
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/14288.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 ssh "sudo cat /usr/share/ca-certificates/14288.pem"
functional_test.go:1972: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 ssh "sudo cat /usr/share/ca-certificates/14288.pem": (11.3719435s)
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1972: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 ssh "sudo cat /etc/ssl/certs/51391683.0": (11.2314394s)
functional_test.go:1998: Checking for existence of /etc/ssl/certs/142882.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 ssh "sudo cat /etc/ssl/certs/142882.pem"
functional_test.go:1999: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 ssh "sudo cat /etc/ssl/certs/142882.pem": (11.5858591s)
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/142882.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 ssh "sudo cat /usr/share/ca-certificates/142882.pem"
functional_test.go:1999: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 ssh "sudo cat /usr/share/ca-certificates/142882.pem": (11.5813869s)
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
E0108 23:21:53.499194   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
functional_test.go:1999: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (11.1413983s)
--- PASS: TestFunctional/parallel/CertSync (67.58s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-838800 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.19s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (11.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-838800 ssh "sudo systemctl is-active crio": exit status 1 (11.4186193s)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 23:20:45.154795   13392 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (11.42s)
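The non-zero exit above is the expected outcome: with the docker runtime active, `systemctl is-active crio` prints "inactive" and exits non-zero (status 3 in this run), which is exactly what the test verifies for a disabled runtime.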

                                                
                                    
TestFunctional/parallel/License (3.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2287: (dbg) Done: out/minikube-windows-amd64.exe license: (3.6946367s)
--- PASS: TestFunctional/parallel/License (3.71s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (20.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-838800 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-838800 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-pdpgx" [d37b8c2e-9374-44e2-b395-b44052f90ae0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-pdpgx" [d37b8c2e-9374-44e2-b395-b44052f90ae0] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 20.0115438s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (20.50s)

                                                
                                    
TestFunctional/parallel/Version/short (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 version --short
--- PASS: TestFunctional/parallel/Version/short (0.44s)

                                                
                                    
TestFunctional/parallel/Version/components (8.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 version -o=json --components: (8.3757844s)
--- PASS: TestFunctional/parallel/Version/components (8.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (7.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 image ls --format short --alsologtostderr: (7.5440604s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-838800 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-838800
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-838800
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-838800 image ls --format short --alsologtostderr:
W0108 23:23:58.996591    7316 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0108 23:23:59.084343    7316 out.go:296] Setting OutFile to fd 928 ...
I0108 23:23:59.085328    7316 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:23:59.085328    7316 out.go:309] Setting ErrFile to fd 960...
I0108 23:23:59.085328    7316 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:23:59.106329    7316 config.go:182] Loaded profile config "functional-838800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 23:23:59.106329    7316 config.go:182] Loaded profile config "functional-838800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 23:23:59.107340    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-838800 ).state
I0108 23:24:01.391075    7316 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0108 23:24:01.391075    7316 main.go:141] libmachine: [stderr =====>] : 
I0108 23:24:01.409273    7316 ssh_runner.go:195] Run: systemctl --version
I0108 23:24:01.409273    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-838800 ).state
I0108 23:24:03.645439    7316 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0108 23:24:03.645439    7316 main.go:141] libmachine: [stderr =====>] : 
I0108 23:24:03.645439    7316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-838800 ).networkadapters[0]).ipaddresses[0]
I0108 23:24:06.215544    7316 main.go:141] libmachine: [stdout =====>] : 172.24.109.223

                                                
                                                
I0108 23:24:06.215806    7316 main.go:141] libmachine: [stderr =====>] : 
I0108 23:24:06.216037    7316 sshutil.go:53] new ssh client: &{IP:172.24.109.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-838800\id_rsa Username:docker}
I0108 23:24:06.319292    7316 ssh_runner.go:235] Completed: systemctl --version: (4.9100192s)
I0108 23:24:06.331165    7316 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (7.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (7.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 image ls --format table --alsologtostderr: (7.7490898s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-838800 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-838800 | 347e8b025075c | 30B    |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| gcr.io/google-containers/addon-resizer      | functional-838800 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | alpine            | 529b5644c430c | 42.6MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/localhost/my-image                | functional-838800 | 4c17cf0cb43e0 | 1.24MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| docker.io/library/nginx                     | latest            | d453dd892d935 | 187MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-838800 image ls --format table --alsologtostderr:
W0108 23:24:19.884930   14928 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0108 23:24:19.967647   14928 out.go:296] Setting OutFile to fd 796 ...
I0108 23:24:19.968800   14928 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:24:19.968861   14928 out.go:309] Setting ErrFile to fd 928...
I0108 23:24:19.968918   14928 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:24:19.987604   14928 config.go:182] Loaded profile config "functional-838800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 23:24:19.997055   14928 config.go:182] Loaded profile config "functional-838800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 23:24:19.998074   14928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-838800 ).state
I0108 23:24:22.285587   14928 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0108 23:24:22.285655   14928 main.go:141] libmachine: [stderr =====>] : 
I0108 23:24:22.301431   14928 ssh_runner.go:195] Run: systemctl --version
I0108 23:24:22.302012   14928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-838800 ).state
I0108 23:24:24.585185   14928 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0108 23:24:24.585267   14928 main.go:141] libmachine: [stderr =====>] : 
I0108 23:24:24.585267   14928 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-838800 ).networkadapters[0]).ipaddresses[0]
I0108 23:24:27.305527   14928 main.go:141] libmachine: [stdout =====>] : 172.24.109.223

                                                
                                                
I0108 23:24:27.305527   14928 main.go:141] libmachine: [stderr =====>] : 
I0108 23:24:27.305759   14928 sshutil.go:53] new ssh client: &{IP:172.24.109.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-838800\id_rsa Username:docker}
I0108 23:24:27.410876   14928 ssh_runner.go:235] Completed: systemctl --version: (5.1089085s)
I0108 23:24:27.422082   14928 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.75s)

TestFunctional/parallel/ImageCommands/ImageListJson (7.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 image ls --format json --alsologtostderr: (7.8768372s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-838800 image ls --format json --alsologtostderr:
[{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11
015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"347e8b025075c49dc9aa38dbb381931f1c7ab26de0bd3b935e400c0560c3c839","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-838800"],"size":"30"},{"id":"529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/p
ause:latest"],"size":"240000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-838800"],"size":"32900000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-838800 image ls --format json --alsologtostderr:
W0108 23:24:04.051773   10464 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0108 23:24:04.157928   10464 out.go:296] Setting OutFile to fd 796 ...
I0108 23:24:04.224490   10464 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:24:04.224490   10464 out.go:309] Setting ErrFile to fd 784...
I0108 23:24:04.224589   10464 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:24:04.243667   10464 config.go:182] Loaded profile config "functional-838800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 23:24:04.245030   10464 config.go:182] Loaded profile config "functional-838800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 23:24:04.246532   10464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-838800 ).state
I0108 23:24:06.530551   10464 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0108 23:24:06.530551   10464 main.go:141] libmachine: [stderr =====>] : 
I0108 23:24:06.545567   10464 ssh_runner.go:195] Run: systemctl --version
I0108 23:24:06.545567   10464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-838800 ).state
I0108 23:24:08.844418   10464 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0108 23:24:08.844628   10464 main.go:141] libmachine: [stderr =====>] : 
I0108 23:24:08.844628   10464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-838800 ).networkadapters[0]).ipaddresses[0]
I0108 23:24:11.590741   10464 main.go:141] libmachine: [stdout =====>] : 172.24.109.223

                                                
                                                
I0108 23:24:11.590903   10464 main.go:141] libmachine: [stderr =====>] : 
I0108 23:24:11.591110   10464 sshutil.go:53] new ssh client: &{IP:172.24.109.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-838800\id_rsa Username:docker}
I0108 23:24:11.710535   10464 ssh_runner.go:235] Completed: systemctl --version: (5.1649674s)
I0108 23:24:11.721971   10464 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.88s)
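Each entry in the JSON above carries id, repoDigests, repoTags and size (size is a quoted string). A minimal sketch of decoding that output, assuming the struct below (it mirrors the keys shown in the log but is not minikube's own type):

// sketch only: run the same `image ls --format json` command and decode its output
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageInfo struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", "functional-838800",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []imageInfo
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	for _, img := range images {
		fmt.Println(img.ID[:12], img.RepoTags, img.Size)
	}
}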

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (7.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 image ls --format yaml --alsologtostderr: (7.9460293s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-838800 image ls --format yaml --alsologtostderr:
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-838800
size: "32900000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 347e8b025075c49dc9aa38dbb381931f1c7ab26de0bd3b935e400c0560c3c839
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-838800
size: "30"
- id: 529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-838800 image ls --format yaml --alsologtostderr:
W0108 23:24:11.944497    4768 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0108 23:24:12.023439    4768 out.go:296] Setting OutFile to fd 840 ...
I0108 23:24:12.024431    4768 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:24:12.024431    4768 out.go:309] Setting ErrFile to fd 748...
I0108 23:24:12.024431    4768 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:24:12.042443    4768 config.go:182] Loaded profile config "functional-838800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 23:24:12.042443    4768 config.go:182] Loaded profile config "functional-838800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 23:24:12.043441    4768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-838800 ).state
I0108 23:24:14.332216    4768 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0108 23:24:14.332295    4768 main.go:141] libmachine: [stderr =====>] : 
I0108 23:24:14.347256    4768 ssh_runner.go:195] Run: systemctl --version
I0108 23:24:14.347256    4768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-838800 ).state
I0108 23:24:16.643583    4768 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0108 23:24:16.643583    4768 main.go:141] libmachine: [stderr =====>] : 
I0108 23:24:16.643583    4768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-838800 ).networkadapters[0]).ipaddresses[0]
I0108 23:24:19.378373    4768 main.go:141] libmachine: [stdout =====>] : 172.24.109.223

                                                
                                                
I0108 23:24:19.378373    4768 main.go:141] libmachine: [stderr =====>] : 
I0108 23:24:19.378654    4768 sshutil.go:53] new ssh client: &{IP:172.24.109.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-838800\id_rsa Username:docker}
I0108 23:24:19.500835    4768 ssh_runner.go:235] Completed: systemctl --version: (5.1535778s)
I0108 23:24:19.511606    4768 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (7.95s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (28.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-838800 ssh pgrep buildkitd: exit status 1 (9.9855731s)

                                                
                                                
** stderr ** 
	W0108 23:24:06.541547    6816 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 image build -t localhost/my-image:functional-838800 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 image build -t localhost/my-image:functional-838800 testdata\build --alsologtostderr: (10.8353488s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-838800 image build -t localhost/my-image:functional-838800 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 75c891275e00
Removing intermediate container 75c891275e00
---> ce439292eea8
Step 3/3 : ADD content.txt /
---> 4c17cf0cb43e
Successfully built 4c17cf0cb43e
Successfully tagged localhost/my-image:functional-838800
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-838800 image build -t localhost/my-image:functional-838800 testdata\build --alsologtostderr:
W0108 23:24:16.525887    9504 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0108 23:24:16.607429    9504 out.go:296] Setting OutFile to fd 804 ...
I0108 23:24:16.623596    9504 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:24:16.623596    9504 out.go:309] Setting ErrFile to fd 960...
I0108 23:24:16.623596    9504 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:24:16.645588    9504 config.go:182] Loaded profile config "functional-838800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 23:24:16.661589    9504 config.go:182] Loaded profile config "functional-838800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 23:24:16.662662    9504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-838800 ).state
I0108 23:24:18.951044    9504 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0108 23:24:18.951113    9504 main.go:141] libmachine: [stderr =====>] : 
I0108 23:24:18.965853    9504 ssh_runner.go:195] Run: systemctl --version
I0108 23:24:18.966817    9504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-838800 ).state
I0108 23:24:21.300681    9504 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0108 23:24:21.300745    9504 main.go:141] libmachine: [stderr =====>] : 
I0108 23:24:21.301050    9504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-838800 ).networkadapters[0]).ipaddresses[0]
I0108 23:24:24.049679    9504 main.go:141] libmachine: [stdout =====>] : 172.24.109.223

                                                
                                                
I0108 23:24:24.049777    9504 main.go:141] libmachine: [stderr =====>] : 
I0108 23:24:24.050279    9504 sshutil.go:53] new ssh client: &{IP:172.24.109.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-838800\id_rsa Username:docker}
I0108 23:24:24.169879    9504 ssh_runner.go:235] Completed: systemctl --version: (5.2030615s)
I0108 23:24:24.169879    9504 build_images.go:151] Building image from path: C:\Users\jenkins.minikube1\AppData\Local\Temp\build.2827448836.tar
I0108 23:24:24.186600    9504 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0108 23:24:24.223834    9504 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2827448836.tar
I0108 23:24:24.231568    9504 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2827448836.tar: stat -c "%s %y" /var/lib/minikube/build/build.2827448836.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2827448836.tar': No such file or directory
I0108 23:24:24.231904    9504 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\AppData\Local\Temp\build.2827448836.tar --> /var/lib/minikube/build/build.2827448836.tar (3072 bytes)
I0108 23:24:24.302724    9504 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2827448836
I0108 23:24:24.334538    9504 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2827448836 -xf /var/lib/minikube/build/build.2827448836.tar
I0108 23:24:24.353553    9504 docker.go:346] Building image: /var/lib/minikube/build/build.2827448836
I0108 23:24:24.364693    9504 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-838800 /var/lib/minikube/build/build.2827448836
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
I0108 23:24:27.128441    9504 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-838800 /var/lib/minikube/build/build.2827448836: (2.7637471s)
I0108 23:24:27.142451    9504 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2827448836
I0108 23:24:27.178438    9504 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2827448836.tar
I0108 23:24:27.194526    9504 build_images.go:207] Built localhost/my-image:functional-838800 from C:\Users\jenkins.minikube1\AppData\Local\Temp\build.2827448836.tar
I0108 23:24:27.194603    9504 build_images.go:123] succeeded building to: functional-838800
I0108 23:24:27.194672    9504 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 image ls: (7.6809718s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (28.50s)
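Note: the pgrep probe above is how the test detects whether BuildKit is available inside the VM; the exit status 1 here lines up with the legacy-builder deprecation notice in the build output further down. A minimal sketch of running the same probe by hand (same functional-838800 profile assumed; only commands already shown in this log):
  # Probe for a buildkitd process inside the functional-838800 VM.
  out/minikube-windows-amd64.exe -p functional-838800 ssh "pgrep buildkitd"
  if ($LASTEXITCODE -ne 0) { Write-Host "buildkitd not running; expect the legacy docker builder" }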

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (4.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.138043s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-838800
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (24.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 image load --daemon gcr.io/google-containers/addon-resizer:functional-838800 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 image load --daemon gcr.io/google-containers/addon-resizer:functional-838800 --alsologtostderr: (15.9636859s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 image ls: (8.6969762s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (24.66s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (14.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 service list
functional_test.go:1458: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 service list: (14.4792024s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (14.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (21.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 image load --daemon gcr.io/google-containers/addon-resizer:functional-838800 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 image load --daemon gcr.io/google-containers/addon-resizer:functional-838800 --alsologtostderr: (13.2126156s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 image ls: (8.6319904s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (21.85s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (14.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 service list -o json: (14.3014817s)
functional_test.go:1493: Took "14.3014817s" to run "out/minikube-windows-amd64.exe -p functional-838800 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (14.30s)
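Note: a sketch of consuming that JSON output programmatically; the Namespace/Name fields are an assumption about the schema, which this log does not show:
  # Parse "service list -o json" in PowerShell (field names assumed; adjust to the real schema).
  $services = out/minikube-windows-amd64.exe -p functional-838800 service list -o json | Out-String | ConvertFrom-Json
  $services | Select-Object Namespace, Name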

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (29.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.8792299s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-838800
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 image load --daemon gcr.io/google-containers/addon-resizer:functional-838800 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 image load --daemon gcr.io/google-containers/addon-resizer:functional-838800 --alsologtostderr: (16.195986s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 image ls: (8.9241621s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (29.29s)

                                                
                                    
TestFunctional/parallel/DockerEnv/powershell (50.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-838800 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-838800"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-838800 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-838800": (32.3737533s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-838800 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-838800 docker-env | Invoke-Expression ; docker images": (18.0299653s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (50.42s)
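Note: this is the standard docker-env round trip for pointing the host docker client at the VM's daemon for the current session; a minimal sketch, including the matching unset:
  # Import the VM's DOCKER_* environment into this session, use it, then clear it again.
  out/minikube-windows-amd64.exe -p functional-838800 docker-env | Invoke-Expression
  docker images
  out/minikube-windows-amd64.exe -p functional-838800 docker-env --unset | Invoke-Expression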

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (3.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 update-context --alsologtostderr -v=2
functional_test.go:2118: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 update-context --alsologtostderr -v=2: (3.2096156s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (3.21s)
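Note: update-context rewrites the profile's kubeconfig entry to the VM's current address; a quick sketch of verifying the effect (a host-side kubectl is an assumption, not part of this test):
  # Refresh the kubeconfig entry, then print the API server URL it now points at.
  out/minikube-windows-amd64.exe -p functional-838800 update-context
  kubectl --context functional-838800 config view --minify -o jsonpath='{.clusters[0].cluster.server}'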

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 update-context --alsologtostderr -v=2
functional_test.go:2118: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 update-context --alsologtostderr -v=2: (2.5665433s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.57s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (2.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 update-context --alsologtostderr -v=2
functional_test.go:2118: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 update-context --alsologtostderr -v=2: (2.5405524s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (11.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 image save gcr.io/google-containers/addon-resizer:functional-838800 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 image save gcr.io/google-containers/addon-resizer:functional-838800 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (11.2550389s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (11.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (18.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 image rm gcr.io/google-containers/addon-resizer:functional-838800 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 image rm gcr.io/google-containers/addon-resizer:functional-838800 --alsologtostderr: (9.4502354s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 image ls: (8.9617008s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (18.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (9.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1274: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (9.1303501s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (9.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (21.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (12.012343s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 image ls: (9.6590009s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (21.67s)
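Note: ImageSaveToFile above and this ImageLoadFromFile run are two halves of a tar round trip; condensed into one sketch (the local tarball path is illustrative):
  # Export the tagged image from the cluster to a tarball, load it back, then confirm it is listed.
  out/minikube-windows-amd64.exe -p functional-838800 image save gcr.io/google-containers/addon-resizer:functional-838800 .\addon-resizer-save.tar
  out/minikube-windows-amd64.exe -p functional-838800 image load .\addon-resizer-save.tar
  out/minikube-windows-amd64.exe -p functional-838800 image ls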

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (10.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1309: (dbg) Done: out/minikube-windows-amd64.exe profile list: (9.9096591s)
functional_test.go:1314: Took "9.9101353s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1328: Took "296.5625ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (10.21s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (9.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1360: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (9.1328183s)
functional_test.go:1365: Took "9.1332417s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1378: Took "307.9879ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (9.44s)
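Note: the JSON form of the profile list can be consumed directly in PowerShell; a sketch, assuming the usual valid/invalid grouping with a Name field (the actual schema is not shown in this log):
  # Print the names of profiles minikube reports as valid (schema assumed).
  $profiles = out/minikube-windows-amd64.exe profile list -o json | Out-String | ConvertFrom-Json
  $profiles.valid | ForEach-Object { $_.Name }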

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (10.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-838800
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-838800 image save --daemon gcr.io/google-containers/addon-resizer:functional-838800 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-838800 image save --daemon gcr.io/google-containers/addon-resizer:functional-838800 --alsologtostderr: (10.3231082s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-838800
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (10.79s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-838800 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-838800 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-838800 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-838800 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 7440: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 768: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.83s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-838800 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-838800 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [2ac8b461-b822-4c06-b7d1-10430ea0a316] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [2ac8b461-b822-4c06-b7d1-10430ea0a316] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 17.0119862s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.61s)
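Note: the tunnel tests pair a background tunnel process with a LoadBalancer service; a sketch of checking the assigned external IP while the tunnel is running (the nginx-svc service name is inferred from the pod above; the actual manifest, testdata\testsvc.yaml, is not shown here):
  # With "minikube tunnel" running in another window, the LoadBalancer service should get an ingress IP.
  kubectl --context functional-838800 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'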

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-838800 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3888: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.52s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-838800
--- PASS: TestFunctional/delete_addon-resizer_images (0.52s)

                                                
                                    
TestFunctional/delete_my-image_image (0.2s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-838800
--- PASS: TestFunctional/delete_my-image_image (0.20s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.2s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-838800
--- PASS: TestFunctional/delete_minikube_cached_images (0.20s)

                                                
                                    
TestImageBuild/serial/Setup (196.51s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-271900 --driver=hyperv
E0108 23:30:30.310798   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0108 23:30:43.614741   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
E0108 23:30:43.630177   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
E0108 23:30:43.646064   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
E0108 23:30:43.677646   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
E0108 23:30:43.724437   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
E0108 23:30:43.818964   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
E0108 23:30:43.994032   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
E0108 23:30:44.325383   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
E0108 23:30:44.976709   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
E0108 23:30:46.265680   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
E0108 23:30:48.829292   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
E0108 23:30:53.954431   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
E0108 23:31:04.204118   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
E0108 23:31:24.690927   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
E0108 23:32:05.665168   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-271900 --driver=hyperv: (3m16.5092312s)
--- PASS: TestImageBuild/serial/Setup (196.51s)

                                                
                                    
TestImageBuild/serial/NormalBuild (9.79s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-271900
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-271900: (9.7940046s)
--- PASS: TestImageBuild/serial/NormalBuild (9.79s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (9.51s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-271900
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-271900: (9.5105004s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (9.51s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (7.91s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-271900
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-271900: (7.9066007s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.91s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.72s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-271900
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-271900: (7.7196387s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.72s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (241.03s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-744200 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv
E0108 23:35:30.318153   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0108 23:35:43.623464   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
E0108 23:36:11.450346   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-744200 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv: (4m1.0290175s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (241.03s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (40.83s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-744200 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-744200 addons enable ingress --alsologtostderr -v=5: (40.8336231s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (40.83s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (14.78s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-744200 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-744200 addons enable ingress-dns --alsologtostderr -v=5: (14.7841202s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (14.78s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (96.18s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-744200 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-744200 replace --force -f testdata\nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-744200 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1dbe619c-f8e1-4b2c-bb4d-6184a9f81249] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0108 23:38:33.506885   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
helpers_test.go:344: "nginx" [1dbe619c-f8e1-4b2c-bb4d-6184a9f81249] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 32.011811s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-744200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-744200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.5691365s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p ingress-addon-legacy-744200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0108 23:39:01.621044     772 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-744200 replace --force -f testdata\ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-744200 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-744200 ip: (2.4750291s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.24.96.142
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-744200 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-744200 addons disable ingress-dns --alsologtostderr -v=1: (27.3844987s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-744200 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-744200 addons disable ingress --alsologtostderr -v=1: (21.8323447s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (96.18s)
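Note: the "Unable to resolve the current Docker CLI context \"default\"" warning seen in stderr here (and throughout this run) means the host docker configuration references a context whose metadata file is missing; it does not affect the assertions, and the context state can be inspected with the stock docker commands:
  # Show the configured docker CLI contexts and which one is currently selected.
  docker context ls
  docker context show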

                                                
                                    
TestJSONOutput/start/Command (206.45s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-219800 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0108 23:43:27.424247   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
E0108 23:43:27.439316   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
E0108 23:43:27.454747   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
E0108 23:43:27.485427   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
E0108 23:43:27.532086   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
E0108 23:43:27.625477   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
E0108 23:43:27.795750   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
E0108 23:43:28.122315   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
E0108 23:43:28.769731   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
E0108 23:43:30.052156   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
E0108 23:43:32.620517   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
E0108 23:43:37.749995   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
E0108 23:43:47.999755   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
E0108 23:44:08.482188   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-219800 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m26.4473931s)
--- PASS: TestJSONOutput/start/Command (206.45s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (8.1s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-219800 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-219800 --output=json --user=testUser: (8.0978815s)
--- PASS: TestJSONOutput/pause/Command (8.10s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (8.03s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-219800 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-219800 --output=json --user=testUser: (8.0304736s)
--- PASS: TestJSONOutput/unpause/Command (8.03s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (29.54s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-219800 --output=json --user=testUser
E0108 23:44:49.450304   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-219800 --output=json --user=testUser: (29.540281s)
--- PASS: TestJSONOutput/stop/Command (29.54s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (1.66s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-690700 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-690700 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (343.5422ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"627d28ef-e010-43fd-bdfb-e0b5b23411bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-690700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d7690b5f-107f-43d5-97a4-52a13024aed4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube1\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"27233b5b-bcd3-45e9-ab9d-3cc1c42a2886","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"70656c69-ec55-4f09-bb4f-78aec589297c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"c9d19599-72ae-4b8f-a78a-7b8a12a9a1af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17830"}}
	{"specversion":"1.0","id":"7d40c14d-e7a8-41ff-a591-a37eb95a5ba5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"39c141b3-b29b-45f1-ad05-8c953d1f5993","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 23:45:17.310528   11416 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-690700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-690700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-690700: (1.3159842s)
--- PASS: TestErrorJSONOutput (1.66s)
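Note: every line minikube prints with --output=json is a self-contained CloudEvents-style object, as the stdout block above shows; a sketch of filtering out the error event in PowerShell (re-running the same deliberately failing start):
  # Convert each JSON line and pull the message out of the io.k8s.sigs.minikube.error event.
  out/minikube-windows-amd64.exe start -p json-output-error-690700 --memory=2200 --output=json --wait=true --driver=fail 2>$null |
      ForEach-Object { $_ | ConvertFrom-Json } |
      Where-Object { $_.type -eq 'io.k8s.sigs.minikube.error' } |
      ForEach-Object { $_.data.message }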

                                                
                                    
TestMainNoArgs (0.25s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.25s)

                                                
                                    
TestMinikubeProfile (496.03s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-371300 --driver=hyperv
E0108 23:45:30.316994   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0108 23:45:43.614817   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
E0108 23:46:11.381729   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
E0108 23:47:06.822945   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
E0108 23:48:27.429965   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-371300 --driver=hyperv: (3m14.3216672s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-371300 --driver=hyperv
E0108 23:48:55.232333   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
E0108 23:50:30.317875   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0108 23:50:43.620590   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-371300 --driver=hyperv: (3m16.1918974s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-371300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (15.1323836s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-371300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (15.0844745s)
helpers_test.go:175: Cleaning up "second-371300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-371300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-371300: (37.1429554s)
helpers_test.go:175: Cleaning up "first-371300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-371300
E0108 23:53:27.424115   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-371300: (37.1849313s)
--- PASS: TestMinikubeProfile (496.03s)
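
TestMinikubeProfile above calls profile list -ojson twice and inspects the JSON it returns. A minimal sketch of issuing the same call from Go is below; it assumes a minikube binary on PATH, does not assert any particular schema, and only checks that the output is one valid JSON document before re-indenting it for reading.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same command the test runs, here via whatever minikube binary is on PATH.
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		log.Fatalf("minikube profile list failed: %v", err)
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, out, "", "  "); err != nil {
		log.Fatalf("output was not a single JSON document: %v", err)
	}
	fmt.Println(pretty.String())
}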

TestMountStart/serial/StartWithMountFirst (150.4s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-878400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0108 23:55:13.512538   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0108 23:55:30.309268   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0108 23:55:43.614496   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-878400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m29.4044933s)
--- PASS: TestMountStart/serial/StartWithMountFirst (150.40s)

TestMountStart/serial/VerifyMountFirst (9.67s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-878400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-878400 ssh -- ls /minikube-host: (9.6725528s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.67s)

TestMountStart/serial/StartWithMountSecond (151.14s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-966700 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0108 23:58:27.419654   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-966700 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m30.1321881s)
--- PASS: TestMountStart/serial/StartWithMountSecond (151.14s)

TestMountStart/serial/VerifyMountSecond (9.72s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-966700 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-966700 ssh -- ls /minikube-host: (9.7203768s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.72s)

TestMountStart/serial/DeleteFirst (26.84s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-878400 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-878400 --alsologtostderr -v=5: (26.8437607s)
--- PASS: TestMountStart/serial/DeleteFirst (26.84s)

TestMountStart/serial/VerifyMountPostDelete (9.67s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-966700 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-966700 ssh -- ls /minikube-host: (9.6657044s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.67s)

TestMountStart/serial/Stop (22.05s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-966700
E0108 23:59:50.605546   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-966700: (22.0457882s)
--- PASS: TestMountStart/serial/Stop (22.05s)

TestMountStart/serial/RestartStopped (113.76s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-966700
E0109 00:00:30.316523   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0109 00:00:43.613842   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-966700: (1m52.7403504s)
--- PASS: TestMountStart/serial/RestartStopped (113.76s)

TestMountStart/serial/VerifyMountPostStop (9.75s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-966700 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-966700 ssh -- ls /minikube-host: (9.7455047s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.75s)

TestMultiNode/serial/FreshStart2Nodes (424.36s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-173500 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0109 00:03:27.419644   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
E0109 00:03:46.825231   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
E0109 00:05:30.307960   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0109 00:05:43.625078   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
E0109 00:08:27.431249   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
multinode_test.go:86: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-173500 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m39.7307698s)
multinode_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 status --alsologtostderr
multinode_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 status --alsologtostderr: (24.6319293s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (424.36s)

TestMultiNode/serial/DeployApp2Nodes (10.03s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-173500 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-173500 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-173500 -- rollout status deployment/busybox: (3.402321s)
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-173500 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-173500 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-173500 -- exec busybox-5bc68d56bd-cfnc7 -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-173500 -- exec busybox-5bc68d56bd-cfnc7 -- nslookup kubernetes.io: (2.1755796s)
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-173500 -- exec busybox-5bc68d56bd-txtnl -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-173500 -- exec busybox-5bc68d56bd-cfnc7 -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-173500 -- exec busybox-5bc68d56bd-txtnl -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-173500 -- exec busybox-5bc68d56bd-cfnc7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-173500 -- exec busybox-5bc68d56bd-txtnl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (10.03s)

TestMultiNode/serial/AddNode (224.28s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-173500 -v 3 --alsologtostderr
E0109 00:10:43.616235   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
E0109 00:11:53.526433   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0109 00:13:27.420620   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
multinode_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-173500 -v 3 --alsologtostderr: (3m7.2776227s)
multinode_test.go:117: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 status --alsologtostderr
multinode_test.go:117: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 status --alsologtostderr: (37.0045625s)
--- PASS: TestMultiNode/serial/AddNode (224.28s)

TestMultiNode/serial/MultiNodeLabels (0.2s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-173500 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.20s)

TestMultiNode/serial/ProfileList (7.8s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (7.8018994s)
--- PASS: TestMultiNode/serial/ProfileList (7.80s)

TestMultiNode/serial/CopyFile (367.88s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 status --output json --alsologtostderr: (36.3262884s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 cp testdata\cp-test.txt multinode-173500:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 cp testdata\cp-test.txt multinode-173500:/home/docker/cp-test.txt: (9.6332527s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500 "sudo cat /home/docker/cp-test.txt": (9.7164434s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 cp multinode-173500:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile273249123\001\cp-test_multinode-173500.txt
E0109 00:15:30.312442   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 cp multinode-173500:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile273249123\001\cp-test_multinode-173500.txt: (9.5801213s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500 "sudo cat /home/docker/cp-test.txt"
E0109 00:15:43.614872   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500 "sudo cat /home/docker/cp-test.txt": (9.5974921s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 cp multinode-173500:/home/docker/cp-test.txt multinode-173500-m02:/home/docker/cp-test_multinode-173500_multinode-173500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 cp multinode-173500:/home/docker/cp-test.txt multinode-173500-m02:/home/docker/cp-test_multinode-173500_multinode-173500-m02.txt: (16.6652926s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500 "sudo cat /home/docker/cp-test.txt": (9.5505473s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m02 "sudo cat /home/docker/cp-test_multinode-173500_multinode-173500-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m02 "sudo cat /home/docker/cp-test_multinode-173500_multinode-173500-m02.txt": (9.610004s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 cp multinode-173500:/home/docker/cp-test.txt multinode-173500-m03:/home/docker/cp-test_multinode-173500_multinode-173500-m03.txt
E0109 00:16:30.620983   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 cp multinode-173500:/home/docker/cp-test.txt multinode-173500-m03:/home/docker/cp-test_multinode-173500_multinode-173500-m03.txt: (16.7119232s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500 "sudo cat /home/docker/cp-test.txt": (9.5901714s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m03 "sudo cat /home/docker/cp-test_multinode-173500_multinode-173500-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m03 "sudo cat /home/docker/cp-test_multinode-173500_multinode-173500-m03.txt": (9.4862645s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 cp testdata\cp-test.txt multinode-173500-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 cp testdata\cp-test.txt multinode-173500-m02:/home/docker/cp-test.txt: (9.664221s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m02 "sudo cat /home/docker/cp-test.txt": (9.6617195s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 cp multinode-173500-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile273249123\001\cp-test_multinode-173500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 cp multinode-173500-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile273249123\001\cp-test_multinode-173500-m02.txt: (9.7421442s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m02 "sudo cat /home/docker/cp-test.txt": (9.6514676s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 cp multinode-173500-m02:/home/docker/cp-test.txt multinode-173500:/home/docker/cp-test_multinode-173500-m02_multinode-173500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 cp multinode-173500-m02:/home/docker/cp-test.txt multinode-173500:/home/docker/cp-test_multinode-173500-m02_multinode-173500.txt: (16.6996544s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m02 "sudo cat /home/docker/cp-test.txt": (9.6351749s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500 "sudo cat /home/docker/cp-test_multinode-173500-m02_multinode-173500.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500 "sudo cat /home/docker/cp-test_multinode-173500-m02_multinode-173500.txt": (9.6845686s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 cp multinode-173500-m02:/home/docker/cp-test.txt multinode-173500-m03:/home/docker/cp-test_multinode-173500-m02_multinode-173500-m03.txt
E0109 00:18:27.429441   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 cp multinode-173500-m02:/home/docker/cp-test.txt multinode-173500-m03:/home/docker/cp-test_multinode-173500-m02_multinode-173500-m03.txt: (16.9132347s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m02 "sudo cat /home/docker/cp-test.txt": (9.5433209s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m03 "sudo cat /home/docker/cp-test_multinode-173500-m02_multinode-173500-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m03 "sudo cat /home/docker/cp-test_multinode-173500-m02_multinode-173500-m03.txt": (9.5735998s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 cp testdata\cp-test.txt multinode-173500-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 cp testdata\cp-test.txt multinode-173500-m03:/home/docker/cp-test.txt: (9.5280013s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m03 "sudo cat /home/docker/cp-test.txt": (9.631182s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 cp multinode-173500-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile273249123\001\cp-test_multinode-173500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 cp multinode-173500-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile273249123\001\cp-test_multinode-173500-m03.txt: (9.6342576s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m03 "sudo cat /home/docker/cp-test.txt": (9.7716872s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 cp multinode-173500-m03:/home/docker/cp-test.txt multinode-173500:/home/docker/cp-test_multinode-173500-m03_multinode-173500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 cp multinode-173500-m03:/home/docker/cp-test.txt multinode-173500:/home/docker/cp-test_multinode-173500-m03_multinode-173500.txt: (16.8969271s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m03 "sudo cat /home/docker/cp-test.txt": (9.6237241s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500 "sudo cat /home/docker/cp-test_multinode-173500-m03_multinode-173500.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500 "sudo cat /home/docker/cp-test_multinode-173500-m03_multinode-173500.txt": (9.6239011s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 cp multinode-173500-m03:/home/docker/cp-test.txt multinode-173500-m02:/home/docker/cp-test_multinode-173500-m03_multinode-173500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 cp multinode-173500-m03:/home/docker/cp-test.txt multinode-173500-m02:/home/docker/cp-test_multinode-173500-m03_multinode-173500-m02.txt: (16.6277763s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m03 "sudo cat /home/docker/cp-test.txt"
E0109 00:20:26.833427   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m03 "sudo cat /home/docker/cp-test.txt": (9.5953689s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m02 "sudo cat /home/docker/cp-test_multinode-173500-m03_multinode-173500-m02.txt"
E0109 00:20:30.312094   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 ssh -n multinode-173500-m02 "sudo cat /home/docker/cp-test_multinode-173500-m03_multinode-173500-m02.txt": (9.6954971s)
--- PASS: TestMultiNode/serial/CopyFile (367.88s)
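
The CopyFile steps above repeat one pattern: minikube cp pushes a file onto a node, then minikube ssh -n <node> cats it back so the contents can be compared. A minimal Go sketch of that round trip, outside the test suite, is below; the profile and node names are copied from this log but are otherwise placeholders, and a minikube binary is assumed to be on PATH.

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const profile, node, remote = "multinode-173500", "multinode-173500-m02", "/home/docker/cp-test.txt"
	want := "hello from cp-test"

	// Write a small local file to push to the node.
	src, err := os.CreateTemp("", "cp-test-*.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(src.Name())
	if _, err := src.WriteString(want + "\n"); err != nil {
		log.Fatal(err)
	}
	src.Close()

	// minikube -p <profile> cp <local> <node>:<remote>
	if out, err := exec.Command("minikube", "-p", profile, "cp", src.Name(), node+":"+remote).CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}
	// minikube -p <profile> ssh -n <node> -- sudo cat <remote>
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "--", "sudo", "cat", remote).Output()
	if err != nil {
		log.Fatalf("ssh cat failed: %v", err)
	}
	if strings.TrimSpace(string(got)) != want {
		log.Fatalf("round trip mismatch: got %q, want %q", got, want)
	}
	log.Printf("verified %s on %s", remote, node)
}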

TestMultiNode/serial/StopNode (67.57s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 node stop m03
E0109 00:20:43.617368   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
multinode_test.go:238: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 node stop m03: (14.5384732s)
multinode_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-173500 status: exit status 7 (26.6855981s)

-- stdout --
	multinode-173500
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-173500-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-173500-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0109 00:20:53.763922    7720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
multinode_test.go:251: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-173500 status --alsologtostderr: exit status 7 (26.3457179s)

-- stdout --
	multinode-173500
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-173500-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-173500-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0109 00:21:20.439405    9988 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0109 00:21:20.521696    9988 out.go:296] Setting OutFile to fd 804 ...
	I0109 00:21:20.522072    9988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:21:20.522647    9988 out.go:309] Setting ErrFile to fd 1016...
	I0109 00:21:20.522647    9988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:21:20.536713    9988 out.go:303] Setting JSON to false
	I0109 00:21:20.536713    9988 mustload.go:65] Loading cluster: multinode-173500
	I0109 00:21:20.536956    9988 notify.go:220] Checking for updates...
	I0109 00:21:20.537167    9988 config.go:182] Loaded profile config "multinode-173500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0109 00:21:20.537167    9988 status.go:255] checking status of multinode-173500 ...
	I0109 00:21:20.537963    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:21:22.769293    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:21:22.769293    9988 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:21:22.769293    9988 status.go:330] multinode-173500 host status = "Running" (err=<nil>)
	I0109 00:21:22.769293    9988 host.go:66] Checking if "multinode-173500" exists ...
	I0109 00:21:22.770124    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:21:24.973879    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:21:24.973879    9988 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:21:24.974046    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:21:27.580911    9988 main.go:141] libmachine: [stdout =====>] : 172.24.100.178
	
	I0109 00:21:27.580969    9988 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:21:27.580969    9988 host.go:66] Checking if "multinode-173500" exists ...
	I0109 00:21:27.595273    9988 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0109 00:21:27.595273    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500 ).state
	I0109 00:21:29.718059    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:21:29.718059    9988 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:21:29.718166    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500 ).networkadapters[0]).ipaddresses[0]
	I0109 00:21:32.294741    9988 main.go:141] libmachine: [stdout =====>] : 172.24.100.178
	
	I0109 00:21:32.294983    9988 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:21:32.295338    9988 sshutil.go:53] new ssh client: &{IP:172.24.100.178 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500\id_rsa Username:docker}
	I0109 00:21:32.398400    9988 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.803127s)
	I0109 00:21:32.413976    9988 ssh_runner.go:195] Run: systemctl --version
	I0109 00:21:32.439092    9988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:21:32.460718    9988 kubeconfig.go:92] found "multinode-173500" server: "https://172.24.100.178:8443"
	I0109 00:21:32.460793    9988 api_server.go:166] Checking apiserver status ...
	I0109 00:21:32.474410    9988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:21:32.508067    9988 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2039/cgroup
	I0109 00:21:32.522900    9988 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod6d4780fbf78826137e2d0549410b3c52/e4e40eb718ff1811cfffe281d5c6abadd3dea086fad69e9f27695c381a839f74"
	I0109 00:21:32.535400    9988 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod6d4780fbf78826137e2d0549410b3c52/e4e40eb718ff1811cfffe281d5c6abadd3dea086fad69e9f27695c381a839f74/freezer.state
	I0109 00:21:32.550207    9988 api_server.go:204] freezer state: "THAWED"
	I0109 00:21:32.550241    9988 api_server.go:253] Checking apiserver healthz at https://172.24.100.178:8443/healthz ...
	I0109 00:21:32.560737    9988 api_server.go:279] https://172.24.100.178:8443/healthz returned 200:
	ok
	I0109 00:21:32.560737    9988 status.go:421] multinode-173500 apiserver status = Running (err=<nil>)
	I0109 00:21:32.560737    9988 status.go:257] multinode-173500 status: &{Name:multinode-173500 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0109 00:21:32.560737    9988 status.go:255] checking status of multinode-173500-m02 ...
	I0109 00:21:32.560737    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:21:34.696220    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:21:34.696406    9988 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:21:34.696406    9988 status.go:330] multinode-173500-m02 host status = "Running" (err=<nil>)
	I0109 00:21:34.696685    9988 host.go:66] Checking if "multinode-173500-m02" exists ...
	I0109 00:21:34.697454    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:21:36.917020    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:21:36.917114    9988 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:21:36.917347    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:21:39.570384    9988 main.go:141] libmachine: [stdout =====>] : 172.24.108.84
	
	I0109 00:21:39.570590    9988 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:21:39.570590    9988 host.go:66] Checking if "multinode-173500-m02" exists ...
	I0109 00:21:39.585281    9988 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0109 00:21:39.585281    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m02 ).state
	I0109 00:21:41.770334    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0109 00:21:41.770334    9988 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:21:41.770334    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-173500-m02 ).networkadapters[0]).ipaddresses[0]
	I0109 00:21:44.328553    9988 main.go:141] libmachine: [stdout =====>] : 172.24.108.84
	
	I0109 00:21:44.328801    9988 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:21:44.329347    9988 sshutil.go:53] new ssh client: &{IP:172.24.108.84 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-173500-m02\id_rsa Username:docker}
	I0109 00:21:44.432153    9988 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8467503s)
	I0109 00:21:44.446122    9988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:21:44.466768    9988 status.go:257] multinode-173500-m02 status: &{Name:multinode-173500-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0109 00:21:44.466974    9988 status.go:255] checking status of multinode-173500-m03 ...
	I0109 00:21:44.467812    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-173500-m03 ).state
	I0109 00:21:46.617990    9988 main.go:141] libmachine: [stdout =====>] : Off
	
	I0109 00:21:46.617990    9988 main.go:141] libmachine: [stderr =====>] : 
	I0109 00:21:46.617990    9988 status.go:330] multinode-173500-m03 host status = "Stopped" (err=<nil>)
	I0109 00:21:46.617990    9988 status.go:343] host is not running, skipping remaining checks
	I0109 00:21:46.617990    9988 status.go:257] multinode-173500-m03 status: &{Name:multinode-173500-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (67.57s)

TestMultiNode/serial/StartAfterStop (174.8s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 node start m03 --alsologtostderr
E0109 00:23:27.429412   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 node start m03 --alsologtostderr: (2m18.1523877s)
multinode_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-173500 status
multinode_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-173500 status: (36.4428999s)
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (174.80s)

TestPreload (481.82s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-511200 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0109 00:35:30.311445   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0109 00:35:43.620557   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
E0109 00:37:06.840250   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
E0109 00:38:27.421895   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-511200 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (3m49.6689891s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-511200 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-511200 image pull gcr.io/k8s-minikube/busybox: (8.6855701s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-511200
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-511200: (34.6810214s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-511200 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0109 00:40:30.319490   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0109 00:40:43.619545   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-511200 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m43.544808s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-511200 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-511200 image list: (7.6054602s)
helpers_test.go:175: Cleaning up "test-preload-511200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-511200
E0109 00:43:27.420439   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-511200: (37.6282894s)
--- PASS: TestPreload (481.82s)

TestScheduledStopWindows (328.64s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-706300 --memory=2048 --driver=hyperv
E0109 00:45:13.544933   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0109 00:45:30.307587   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0109 00:45:43.622114   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-706300 --memory=2048 --driver=hyperv: (3m14.5702813s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-706300 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-706300 --schedule 5m: (10.9451615s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-706300 -n scheduled-stop-706300
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-706300 -n scheduled-stop-706300: exit status 1 (10.0331493s)

** stderr ** 
	W0109 00:46:54.247051    2460 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-706300 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-706300 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.876811s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-706300 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-706300 --schedule 5s: (10.918356s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-706300
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-706300: exit status 7 (2.4509283s)

-- stdout --
	scheduled-stop-706300
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	W0109 00:48:25.090871    8268 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-706300 -n scheduled-stop-706300
E0109 00:48:27.421821   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-706300 -n scheduled-stop-706300: exit status 7 (2.4947939s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0109 00:48:27.542897   10904 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-706300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-706300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-706300: (27.3422787s)
--- PASS: TestScheduledStopWindows (328.64s)
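
TestScheduledStopWindows above schedules a stop and then watches the host state through status --format={{.Host}}, treating the non-zero exit (exit status 7 in this log) as expected once the machine is down. A minimal Go sketch of that wait loop is below; the profile name and timings are placeholders taken from the log, and a minikube binary is assumed to be on PATH.

package main

import (
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const profile = "scheduled-stop-706300"

	// Same scheduling call the test makes: stop the machine five seconds from now.
	if out, err := exec.Command("minikube", "stop", "-p", profile, "--schedule", "5s").CombinedOutput(); err != nil {
		log.Fatalf("scheduling the stop failed: %v\n%s", err, out)
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// status exits non-zero once the host is stopped (exit status 7 above),
		// so the error is ignored and only the printed host state is inspected.
		out, _ := exec.Command("minikube", "status", "-p", profile, "--format", "{{.Host}}").Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			log.Printf("%s reached Stopped", profile)
			return
		}
		time.Sleep(5 * time.Second)
	}
	log.Fatalf("timed out waiting for %s to stop", profile)
}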

TestKubernetesUpgrade (1093.52s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-248700 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-248700 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv: (6m0.2587949s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-248700
E0109 00:55:30.321339   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-248700: (37.5480631s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-248700 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-248700 status --format={{.Host}}: exit status 7 (2.5456049s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0109 00:55:35.193233    6092 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-248700 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
E0109 00:55:43.622613   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-248700 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (4m33.8720093s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-248700 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-248700 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperv
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-248700 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperv: exit status 106 (345.9915ms)

-- stdout --
	* [kubernetes-upgrade-248700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0109 01:00:11.829290   14892 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-248700
	    minikube start -p kubernetes-upgrade-248700 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2487002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-248700 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-248700 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
E0109 01:00:30.309009   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
E0109 01:00:43.611117   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
E0109 01:01:53.555363   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-852800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:288: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-248700 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (6m18.8193241s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-248700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-248700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-248700: (39.9274576s)
--- PASS: TestKubernetesUpgrade (1093.52s)
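
The downgrade attempt above fails by design: minikube refuses to move an existing v1.29.0-rc.2 cluster back to v1.16.0 and instead suggests recreating it, exiting with status 106. A minimal sketch of that kind of guard, assuming a plain semver comparison between the requested and the existing version; checkDowngrade is a hypothetical helper, not minikube's actual code:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkDowngrade returns an error when the requested version is older than
// the version the existing cluster already runs.
func checkDowngrade(existing, requested string) error {
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf("K8S_DOWNGRADE_UNSUPPORTED: unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
	}
	return nil
}

func main() {
	// Mirrors the failing invocation in the log: v1.29.0-rc.2 -> v1.16.0.
	if err := checkDowngrade("v1.29.0-rc.2", "v1.16.0"); err != nil {
		fmt.Println(err) // the real binary exits 106 here
	}
}
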

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-248700 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-248700 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (387.2769ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-248700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0109 00:48:57.400415    2544 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.39s)
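
The usage error above comes from a mutual-exclusion check: --no-kubernetes and --kubernetes-version cannot be combined, and the binary exits with status 14. A small sketch of such a check using the standard library flag package (minikube's real CLI is built on cobra; this is illustrative only):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start the VM without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	// Asking for a specific Kubernetes version while also asking for no
	// Kubernetes is a usage error, as in the log above.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flags are consistent, continuing startup")
}
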

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.79s)

                                                
                                    
TestPause/serial/Start (301.26s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-108400 --memory=2048 --install-addons=false --wait=all --driver=hyperv
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-108400 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (5m1.2561321s)
--- PASS: TestPause/serial/Start (301.26s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (385.3s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-108400 --alsologtostderr -v=1 --driver=hyperv
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-108400 --alsologtostderr -v=1 --driver=hyperv: (6m25.2773568s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (385.30s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (11.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-748100
version_upgrade_test.go:219: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-748100: (11.0086944s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (11.01s)

                                                
                                    
TestPause/serial/Pause (9.31s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-108400 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-108400 --alsologtostderr -v=5: (9.3052606s)
--- PASS: TestPause/serial/Pause (9.31s)

                                                
                                    
TestPause/serial/VerifyStatus (13.48s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-108400 --output=json --layout=cluster
E0109 01:05:43.616434   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-838800\client.crt: The system cannot find the path specified.
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-108400 --output=json --layout=cluster: exit status 2 (13.4817108s)

                                                
                                                
-- stdout --
	{"Name":"pause-108400","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-108400","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	W0109 01:05:40.899554    8588 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestPause/serial/VerifyStatus (13.48s)
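
The status call above exits with status 2 because the cluster is paused; the JSON payload encodes that as StatusCode 418 / StatusName "Paused". A short decoding sketch covering only the fields visible in that output (the struct is trimmed, not minikube's full status type):

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	// Trimmed copy of the payload shown in the log above.
	payload := []byte(`{"Name":"pause-108400","StatusCode":418,"StatusName":"Paused",
		"Nodes":[{"Name":"pause-108400","StatusCode":200,"StatusName":"OK"}]}`)

	var st clusterStatus
	if err := json.Unmarshal(payload, &st); err != nil {
		log.Fatalf("decode status: %v", err)
	}
	fmt.Printf("cluster %s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
}
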

                                                
                                    
TestPause/serial/Unpause (8.17s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-108400 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-108400 --alsologtostderr -v=5: (8.1669242s)
--- PASS: TestPause/serial/Unpause (8.17s)

                                                
                                    
TestPause/serial/PauseAgain (8.66s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-108400 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-108400 --alsologtostderr -v=5: (8.6566658s)
--- PASS: TestPause/serial/PauseAgain (8.66s)

                                                
                                    
TestPause/serial/DeletePaused (45.45s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-108400 --alsologtostderr -v=5
E0109 01:06:30.647747   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-744200\client.crt: The system cannot find the path specified.
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-108400 --alsologtostderr -v=5: (45.4466559s)
--- PASS: TestPause/serial/DeletePaused (45.45s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (5.6s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (5.5964054s)
--- PASS: TestPause/serial/VerifyDeletedResources (5.60s)
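
The final check runs "minikube profile list --output json" to confirm the deleted profile no longer shows up. A hedged sketch of that verification; the assumption that the payload groups profiles under top-level arrays whose entries carry a "Name" field is mine, not taken from the log:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// profileExists reports whether a profile name still appears anywhere in the
// JSON emitted by "minikube profile list --output json".
func profileExists(name string) (bool, error) {
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var groups map[string][]struct {
		Name string `json:"Name"`
	}
	if err := json.Unmarshal(out, &groups); err != nil {
		return false, err
	}
	for _, profiles := range groups {
		for _, p := range profiles {
			if p.Name == name {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	exists, err := profileExists("pause-108400")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pause-108400 still listed: %v\n", exists)
}
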

                                                
                                    

Test skip (32/208)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (300.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-838800 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-838800 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 11052: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

                                                
                                    
TestFunctional/parallel/DryRun (5.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-838800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-838800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0612469s)

                                                
                                                
-- stdout --
	* [functional-838800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 23:22:49.308948   15324 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0108 23:22:49.396947   15324 out.go:296] Setting OutFile to fd 804 ...
	I0108 23:22:49.396947   15324 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:22:49.396947   15324 out.go:309] Setting ErrFile to fd 956...
	I0108 23:22:49.396947   15324 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:22:49.423947   15324 out.go:303] Setting JSON to false
	I0108 23:22:49.430949   15324 start.go:128] hostinfo: {"hostname":"minikube1","uptime":3664,"bootTime":1704752505,"procs":206,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0108 23:22:49.430949   15324 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 23:22:49.435951   15324 out.go:177] * [functional-838800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I0108 23:22:49.439956   15324 notify.go:220] Checking for updates...
	I0108 23:22:49.442955   15324 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0108 23:22:49.444945   15324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:22:49.447953   15324 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0108 23:22:49.450969   15324 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 23:22:49.460113   15324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:22:49.463881   15324 config.go:182] Loaded profile config "functional-838800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 23:22:49.464504   15324 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.06s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (5.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-838800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-838800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0328895s)

                                                
                                                
-- stdout --
	* [functional-838800] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0108 23:22:54.366735    9264 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0108 23:22:54.471741    9264 out.go:296] Setting OutFile to fd 956 ...
	I0108 23:22:54.471741    9264 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:22:54.471741    9264 out.go:309] Setting ErrFile to fd 748...
	I0108 23:22:54.471741    9264 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:22:54.496734    9264 out.go:303] Setting JSON to false
	I0108 23:22:54.500723    9264 start.go:128] hostinfo: {"hostname":"minikube1","uptime":3669,"bootTime":1704752505,"procs":207,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0108 23:22:54.500723    9264 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 23:22:54.505725    9264 out.go:177] * [functional-838800] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I0108 23:22:54.508723    9264 notify.go:220] Checking for updates...
	I0108 23:22:54.510739    9264 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0108 23:22:54.513731    9264 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:22:54.516733    9264 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0108 23:22:54.518730    9264 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 23:22:54.521726    9264 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:22:54.524732    9264 config.go:182] Loaded profile config "functional-838800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 23:22:54.525732    9264 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.03s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
